The perfect link - The A11Y Collective
How do we write, design, and code a link that works for everyone on every device? Let’s dive into the world of creating the perfect link, without making a pig’s breakfast of it.
This is a handy guideline to remember, even if there are exceptions:
When a keyboard user follows a link, their focus should be taken to the new place; when a keyboard user presses a button, focus should remain on that button.
It gives me warm fuzzies to see an indie web building block like rel="me" getting coverage like this.
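For anyone who hasn’t come across it, rel="me" is just an attribute on an ordinary hyperlink (or on a link element in the head) declaring that the linked profile is also you. A minimal sketch, with placeholder URLs:
<!-- a visible link in the page -->
<a rel="me" href="https://mastodon.example/@myhandle">Me on Mastodon</a>
<!-- or an invisible link element in the head of the document -->
<link rel="me" href="https://github.com/myhandle">
Verification kicks in when the profile at the other end links back with a rel="me" of its own, confirming the relationship in both directions.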
I’ve always liked the idea that your website can be your API. Like, you’ve already got URLs to identify resources, so why not make that URL structure predictable and those resources parsable?
That’s why the (read-only) API for The Session doesn’t live at a separate subdomain. It uses the same URL structure as the regular site, but you can request the resources in an alternative format: JSON, XML, RSS.
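As a rough sketch of how those alternative representations can be advertised in a page’s markup (the URL pattern here is an assumption for illustration, not necessarily what The Session actually uses), alternate link elements do the job:
<!-- hypothetical URLs: the same resource, in machine-readable formats -->
<link rel="alternate" type="application/json" href="/tunes/123?format=json">
<link rel="alternate" type="application/rss+xml" href="/tunes/123?format=rss">
The address identifying the resource stays the same; the format is just a different representation of it.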
This works out pretty well, mostly because I put a lot of thought into the URL structure of the site. I’m something of a URL fetishist, but I think that taking a URL-first approach to information architecture can be a good exercise.
Most of the resources on The Session involve nouns like tunes, events, discussions, and so on. There’s a consistent and predictable structure to the URLs for those sections:
/things
/things/new
/things/search
And then an individual item can be found at:
/things/ID
That’s all nice and predictable and the naming of the URLs matches what you’d expect to find:
Tunes, events, discussions, sessions. Those are all fine. But there’s one section of the site that has this root URL:
/recordings
When I was coming up with the URL structure twenty years ago, it was clear what you’d find there: track listings for albums of music. No one would’ve expected to find actual recordings of music available to listen to on-demand. The bandwidth constraints and technical limitations of the time made that clear.
Two decades on, the situation has changed. Now someone new to the site might well expect to hit a link called “recordings” and expect to hear actual recordings of music.
So I should probably change the label on the link. I don’t think “albums” is quite right—what even is an album any more? The word “discography” is probably the most appropriate label.
Here’s my dilemma: if I update the label, should I also update the URL structure?
Right now, the section of the site with /tunes URLs is labelled “tunes”. The section of the site with /events URLs is labelled “events”. Currently the section of the site with /recordings URLs is labelled “recordings”, but may soon be labelled “discography”.
If you click on “tunes”, you end up at /tunes. But if you click on “discography”, you end up at /recordings.
Is that okay? Am I the only one that would be bothered by that?
I could update the URLs to match the labelling (with redirects for the old URLs, of course), but I’m not so keen on this URL structure:
/discography
/discography/new
/discography/search
/discography/ID
It doesn’t seem as tidy as:
/recordings
/recordings/new
/recordings/search
/recordings/ID
But if I don’t update the URLs to match the label, then I’m just going to have to live with the mismatch.
I’m just thinking out loud here. I think I should definitely update the label. I just won’t make any decision on changing URLs for a while yet.
At the end of next week, I will sally forth to California. I’m going to wend my way to San Francisco where I will be speaking at An Event Apart.
I am very much looking forward to speaking at my first in-person AEAs in exactly three years. That was also in San Francisco, right before The Situation.
I hope to see you there. There are still tickets available.
I’ve put together a brand new talk that I’m very excited about. I’ve already written about the prep for this talk:
So while I’ve been feeling somewhat under the gun as I’ve been preparing this new talk for An Event Apart, I’ve also been feeling that the talk is just the culmination; a way of tying together some stuff I’ve been writing about here for the past year or two.
The talk is called Declarative Design. Here’s the blurb:
Different browsers, different devices, different network speeds…designing for the web can feel like a never-ending battle for control. But what if the solution is to relinquish control? Instead of battling the unknowns, we can lean into them. In the world of programming, there’s the idea of declarative languages: describing what you want to achieve without specifying the exact steps to get there. In this talk, we’ll take this concept of declarative programming and apply it to designing for the web. Instead of focusing on controlling the outputs of the design process, we’ll look at creating the right inputs instead. Leave the final calculations for the outputs to the browser—that’s what computers are good at. We’ll look at CSS features, design systems, design principles, and more. Then you’ll be ready to embrace the fluid, ever-changing, glorious messiness of the World Wide Web!
If you’d like a glimpse into the inside of my head while I’ve been preparing this talk, here’s a linkdump of various resources that are either mentioned in the talk or influenced it…
If you’re thinking of signing up to Hive or Post:
If posts in a social media app do not have URLs that can be linked to and viewed in an unauthenticated browser, or if there is no way to make a new post from a browser, then that program is not a part of the World Wide Web in any meaningful way.
Consign that app to oblivion.
I really like this experiment that Jim is conducting on his own site. I might try to replicate it sometime!
My website has different themes you can choose from. I don’t just mean a dark mode. These themes all look very different from one another.
I assume that 99.99% of people just see the default theme, but I keep the others around anyway. Offering different themes was originally intended as a way of showcasing the power of CSS, and specifically the separation of concerns between structure and presentation. I started doing this before the CSS Zen Garden was created. Dave really took it to the next level by showing how the same HTML document could be styled in an infinite number of ways.
Each theme has its own stylesheet. I’ve got a very simple little style switcher on every page of my site. Selecting a different theme triggers a page refresh with the new styles applied and sets a cookie to remember your preference.
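For what it’s worth, a switcher like that doesn’t need any JavaScript. Here’s a minimal sketch as a form, assuming a hypothetical endpoint that sets the cookie and then redirects back to the referring page (the endpoint and theme names are placeholders, not necessarily how my site does it):
<!-- the action URL is hypothetical; the server sets the cookie and redirects back -->
<form action="/theme" method="post">
  <label for="theme-select">Theme</label>
  <select id="theme-select" name="theme">
    <option value="default">Default</option>
    <option value="dark">Dark</option>
    <option value="highcontrast">High contrast</option>
  </select>
  <button type="submit">Apply</button>
</form>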
I also list out the available stylesheets in the head of every page using link elements that have rel values of alternate and stylesheet together. Each link element also has a title attribute with the name of the theme. That’s the standard way to specify alternative stylesheets.
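Concretely, that markup looks something like this (the file names and theme titles are placeholders):
<!-- the preferred stylesheet: rel="stylesheet" plus a title -->
<link rel="stylesheet" href="/css/default.css" title="Default">
<!-- the alternatives: rel="alternate stylesheet" plus a title -->
<link rel="alternate stylesheet" href="/css/dark.css" title="Dark">
<link rel="alternate stylesheet" href="/css/highcontrast.css" title="High contrast">
A titled link with a rel of stylesheet is treated as the preferred style; the links with a rel of alternate stylesheet are the named alternatives that a browser can expose in its style switcher.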
In Firefox you can switch between the specified stylesheets from the View menu by selecting Page Style (notice that there’s also a No style option—very handy for checking your document structure).
Other browsers like Chrome and Safari don’t do anything with the alternative stylesheets. But they don’t ignore them.
Every browser makes a network request for each alternative stylesheet. The request is non-blocking and seems to be low priority, which is good, but I’m somewhat perplexed by the network request being made at all.
I get why Firefox is requesting those stylesheets. It’s similar to requesting a print stylesheet. Even if the network were to drop, you still want those styles available to the user.
But I can’t think of any reason why Chrome or Safari would download the alternative stylesheets.
I’m very excited about speaking at CSS Day this year. My talk is called In And Out Of Style:
It’s an exciting time for CSS! It feels like new features are being added every day. And yet, through it all, CSS has managed to remain an accessible language for anyone making websites. Is this an inevitable part of the design of CSS? Or has CSS been formed by chance? Let’s take a look at the history—and some alternative histories—of the World Wide Web to better understand where we are today. And then, let’s cast our gaze to the future!
Technically, CSS Day won’t be the first outing for this talk but it will be the in-person debut. I had the chance to give the talk online last week at An Event Apart. Giving a talk online isn’t quite the same as speaking on stage, but I got enough feedback from the attendees that I’m feeling confident about giving the talk in Amsterdam. It went down well with the audience at An Event Apart.
If the description has you intrigued, come along to CSS Day to hear the talk in person. And if you like the subject matter, I’ve put together these links to go with the talk…
“Be linkable and accessible to any client” is a provocative test for whether something is “of the web”.
Following on from my recently-lost long bet, this is a timely bit of data spelunking from Brian analysing the linkrot of 1400 links over 18 years of time.
Dave recently wrote some good advice about what to do—and what not to do—when it comes to complaining about web browsers. I wrote something on this topic a little while back:
If there’s something about a web browser that you’re not happy with (or, indeed, if there’s something you’re really happy with), take the time to write it down and publish it
To summarise Dave’s advice, avoid conspiracy theories and snark; stick to specifics instead.
It’s very good advice that I should heed (especially the bit about avoiding snark). In that spirit, I’d like to document what I think is a bug on iOS.
I don’t need to name the specific browser, because there is basically only one browser allowed on iOS. That’s not snark; that’s a statement of fact.
This bug involves navigating from a progressive web app that has been installed on your home screen to an external web view.
To illustrate the bug, I’ll use the example of The Session. If you want to recreate the bug, you’ll need to have an account on The Session. Let me know if you want to set up a temporary account—I can take care of deleting it afterwards.
Here are the steps:
1. Log in to The Session in the browser on iOS and add the site to your home screen so that it opens as a standalone progressive web app.
2. Open the app from your home screen.
3. Follow a link to an external website, which opens in a web view on top of the app.
4. Close the web view to return to the app.
Expected behaviour: you are returned to the page you were on with no change of state.
Actual behaviour: you are returned to the page you were on but you are logged out.
So the act of visiting an external link in a web view while in a progressive web app in standalone mode seems to cause a loss of cookie-based authentication.
This isn’t permanent. Clicking on any internal link restores the logged-in state.
It is surprising though. My mental model for opening an external link in a web view is that it sits “above” the progressive web app, which remains in stasis “behind” it. But the page must actually be reloading, either when the web view is opened or when the web view is closed. And that reload is behaving like a fetch event without credentials.
Anyway, that’s my bug report. It may already be listed somewhere on the WebKit Bugzilla but I lack the deductive skills to find it. I’m not even sure if that’s the right place for this kind of bug. It might be specific to the operating system rather than the rendering engine.
This isn’t a high priority bug, but it is one of those cumulatively annoying software paper cuts.
Hope this helps!
Don’t see making your own web page as a nostalgia, don’t participate in creating the netstalgia trend. What you make is a statement, an act of emancipation. You make it to continue a 25-year-old tradition of liberation.
It’s a bit of a cliché to talk about living in the future. It’s also a bit pointless. After all, any moment after the big bang is a future when viewed from any point in time before it.
Still, it’s kind of fun when a sci-fi date rolls around. Like in 2015 when we reached the time depicted in Back To The Future 2, or in 2019 when we reached the time of Blade Runner.
In 2022 we are living in the future of web standards. Again, technically, we’re always living in the future of any past discussion of web standards, but this year is significant …in a very insignificant way.
It all goes back to 2008 and an interview with Hixie, editor of the HTML5 spec at the WHATWG at the time. In it, he mentioned the date 2022 as the milestone for having two completely interoperable implementations.
The far more important—and ambitious—date was 2012, when HTML5 was supposed to become a Candidate Recommendation, which is standards-speak for done’n’dusted.
But the mere mention of the year 2022 back in the year 2008 was too much for some people. Jeff Croft, for example, completely lost his shit (Jeff had a habit of posting angry rants and then denying that he was angry or ranty, but merely having a bit of fun).
The whole thing was a big misunderstanding and soon irrelevant: talk of 2022 was dropped from HTML5 discussions. But for a while there, it was fascinating to see web designers and developers contemplate a year that seemed ludicrously futuristic. Jeff wrote:
God knows where I’ll be in 13 years. Quite frankly, I’ll be pretty fucking disappointed in myself (and our entire industry) if I’m writing HTML in 13 years.
That always struck me as odd. If I thought like that, I’d wonder what the point would be in making anything on the web to begin with (bear in mind that both my own personal website and The Session are now entering their third decade of life).
I had a different reaction to Jeff, as I wrote in 2010:
Many web developers were disgusted that such a seemingly far-off date was even being mentioned. My reaction was the opposite. I began to pay attention to HTML5.
But Jeff was far from alone. Scott Gilbertson wrote an angry article on Webmonkey:
If you’re thinking that planning how the web will look and work 13 years from now is a little bit ridiculous, you’re not alone.
Even if your 2022 ronc-o-matic web-enabled toaster (It slices! It dices! It browses! It arouses!) does ship with Firefox v22.3, will HTML still be the dominant language of web? Given that no one can really answer that question, does it make sense to propose a standard so far in the future?
(I’m re-reading that article in the current version of Firefox: 95.0.2.)
Brian Veloso wrote on his site:
Two-thousand-twenty-two. That’s 14 years from now. Can any of us think that far? Wouldn’t our robot overlords, whether you welcome them or not, have taken over by then? Will the internet even matter then?
From the comments on Jeff’s post, there’s Corey Dutson:
2022: God knows what the Internet will look like at that point. Will we even have websites?
Dan Rubin, who has indeed successfully moved from web work to photography, wrote:
I certainly don’t intend to be doing “web work” by that time. I’m very curious to see where the web actually is in 14 years, though I can’t imagine that HTML5 will even get that far; it’ll all be obsolete before 2022.
Joshua Works made a prediction that’s worryingly close to reality:
I’ll be surprised if website-as-HTML is still the preferred method for moving around the tons of data we create, especially in the manner that could have been predicted in 2003 or even today. Hell, iPods will be over 20 years old by then and if everything’s not run as an iPhone App, then something went wrong.
Someone with the moniker Grand Caveman wrote:
In 2022 I’ll be 34, and hopefully the internet will be obsolete by then.
Perhaps the most level-headed observation came from Jonny Axelsson:
The world in 2022 will be pretty much like the world in 2009.
The world in 2009 is pretty much like 1996 which was pretty much like the world in 1983 which was pretty much like the world in 1970. Some changes are fairly sudden, others are slow, some are dramatic, others subtle, but as a whole “pretty much the same” covers it.
The Web in 2022 will not be dramatically different from the Web in 2009. It will be less hot and it will be less cool. The Web is a project, and as it succeeds it will fade out of our attention and into the background. We don’t care about things when they work.
Now that’s a sensible perspective!
So who else is looking forward to seeing what the World Wide Web is like in 2036?
I must remember to write a blog post then and link back to this one. I have no intention of trying to predict the future, but I’m willing to bet that hyperlinks will still be around in 14 years.
We invite software developers to do their part, by
- ensuring their users can conveniently obtain a link to the currently open or selected resource via a user interface; and
- providing an application programming interface (API) to obtain or construct a link to that resource (i.e., to get its address and name).
Internet users use fewer different websites today than they did 20 years ago, and spend most of their “Web” time in app versions of websites (which often provide a better experience only because site owners strategically make it so to increase their lock-in and data harvesting potential). Truly exploring the Web now requires extra effort, like exercising an underused muscle. And if you begin and end your Web experience on just one to three services, that just feels kind of… sad, to me. Wasted potential.
The Clearleft newsletter goes out every two weeks on a Thursday. You can peruse the archive to see past editions.
I think it’s a really good newsletter, but then again, I’m the one who writes it. It just kind of worked out that way. In theory, anyone at Clearleft could write an edition of the newsletter.
To make that prospect less intimidating, I put together a document for my colleagues describing how I go about creating a new edition of the newsletter. Then I thought it might be interesting for other people outside of Clearleft to get a peek at how the sausage is made.
So here’s what I wrote…
The description of the newsletter is:
A round-up of handpicked hyperlinks from Clearleft, covering design, technology, and culture.
It usually has three links (maybe four, tops) on a single topic.
The topic can be anything that’s interesting, especially if it’s related to design or technology. Every now and then the topic can be something that incorporates an item that’s specifically Clearleft-related (a case study, an event, a job opening). In general it’s not very salesy at all so people will tolerate the occasional plug.
You can use the “iiiinteresting” Slack channel to find potential topics of interest. I’ve gotten in the habit of popping potential newsletter fodder in there, and then adding related links in a thread.
Imagine you’re telling a friend about something cool you’ve just discovered. You can sound excited. Don’t worry about this looking unprofessional—it’s better to come across as enthusiastic than too robotic. You can put real feelings on display: anger, disappointment, happiness.
That said, you can also just stick to the facts and describe each link in turn, letting the content speak for itself.
If you’re expressing a feeling or an opinion, use the personal pronoun “I”. Don’t use “we” unless you’re specifically referring to Clearleft.
But most of the time, you won’t be using any pronouns at all:
So-and-so has written an article in such-and-such magazine on this-particular-topic.
You might find it useful to have connecting phrases as you move from link to link:
Speaking of some-specific-thing, this-other-person has a different viewpoint.
or
On the subject of this-particular-topic, so-and-so wrote something about this a while back.
The format of the newsletter is:
Take a look through the archive of previous newsletters to get a feel for it.
Currently the newsletter is called dConstruct from Clearleft. The subject line of every edition is in the format:
dConstruct from Clearleft — Title of the edition
(Note that’s an em dash with a space on either side of it separating the name of the newsletter and the title of the edition)
I often try to come up with a pun-based title (often a punny portmanteau) but that’s not necessary. It should be nice and short though: just one or two words.
A wonderful bit of spelunking into the annals of software interfaces by Elise Blanchard.
When I post a link, I do it for two reasons.
First of all, it’s me pointing at something and saying “Check this out!”
Secondly, it’s a way for me to stash something away that I might want to return to. I tag all my links so when I need to find one again, I just need to think “Now what would past me have tagged it with?” Then I type the appropriate URL: adactio.com/links/tags/whatever
There are some links that I return to again and again.
Back in 2008, I linked to a document called A Few Notes on The Culture. It’s a copy of a post by Iain M Banks to a newsgroup back in 1994.
Alas, that link is dead. Linkrot, innit?
But in 2013 I linked to the same document on a different domain. That link still works even though I believe it was first published around twenty(!) years ago (view source for some pre-CSS markup nostalgia).
Anyway, A Few Notes On The Culture is a fascinating look at the world-building of Iain M Banks’s Culture novels. He talks about the in-world engineering, education, biology, and belief system of his imagined utopia. The part that sticks in my mind is when he talks about economics:
Let me state here a personal conviction that appears, right now, to be profoundly unfashionable; which is that a planned economy can be more productive - and more morally desirable - than one left to market forces.
The market is a good example of evolution in action; the try-everything-and-see-what-works approach. This might provide a perfectly morally satisfactory resource-management system so long as there was absolutely no question of any sentient creature ever being treated purely as one of those resources. The market, for all its (profoundly inelegant) complexities, remains a crude and essentially blind system, and is — without the sort of drastic amendments liable to cripple the economic efficacy which is its greatest claimed asset — intrinsically incapable of distinguishing between simple non-use of matter resulting from processal superfluity and the acute, prolonged and wide-spread suffering of conscious beings.
It is, arguably, in the elevation of this profoundly mechanistic (and in that sense perversely innocent) system to a position above all other moral, philosophical and political values and considerations that humankind displays most convincingly both its present intellectual immaturity and — through grossly pursued selfishness rather than the applied hatred of others — a kind of synthetic evil.
Those three paragraphs might be the most succinct critique of unfettered capitalism I’ve come across. The invisible hand as a paperclip maximiser.
Like I said, it’s a fascinating document. In fact I realised that I should probably store a copy of it for myself.
I have a section of my site called “extras” where I dump miscellaneous stuff. Most of it is unlinked. It’s mostly for my own benefit. That’s where I’ve put my copy of A Few Notes On The Culture.
Here’s a funny thing …for all the times that I’ve revisited the link, I never knew anything about the site it was hosted on—vavatch.co.uk—so this most recent time, I did a bit of clicking around. Clearly it’s the personal website of a sci-fi-loving college student from the early 2000s. But what came as a revelation to me was that the site belonged to …Adrian Hon!
I’m impressed that he kept his old website up even after moving over to the domain mssv.net, founding Six To Start, and writing A History Of The Future In 100 Objects. That’s a great snackable book, by the way. Well worth a read.
My last long-distance trip before we were all grounded by The Situation was to San Francisco at the end of 2019. I attended Indie Web Camp while I was there, which gave me the opportunity to add a little something to my website: an “on this day” page.
I’m glad I did. While it’s probably of little interest to anyone else, I enjoy scrolling back to see how the same date unfolded over the years.
’Sfunny, when I look back at older journal entries they’re often written out of frustration, usually when something in the dev world is bugging me. But when I look back at all the links I’ve bookmarked the vibe is much more enthusiastic, like I’m excitedly pointing at something and saying “Check this out!” I feel like sentiment analyses of those two sections of my site would yield two different results.
But when I scroll down through my “on this day” page, it also feels like descending deeper into the dark waters of linkrot. For each year back in time, the probability of a link still working decreases until there’s nothing but decay.
Sadly this is nothing new. I’ve been lamenting the state of digital preservation for years now. More recently Jonathan Zittrain penned an article in The Atlantic on the topic:
Too much has been lost already. The glue that holds humanity’s knowledge together is coming undone.
In one sense, linkrot is the price we pay for the web’s particular system of hypertext. We don’t have two-way linking, which means there’s no centralised repository of links which would be prohibitively complex to maintain. So when you want to link to something on the web, you just do it. An a element with an href attribute. That’s it. You don’t need to check with the owner of the resource you’re linking to. You don’t need to check with anyone. You have complete freedom to link to any URL you want to.
But it’s that same simple system that makes the act of linking a gamble. If the URL you’ve linked to goes away, you’ll have no way of knowing.
As I scroll down my “on this day” page, I come across more and more dead links that have been snapped off from the fabric of the web.
If I stop and think about it, it can get quite dispiriting. Why bother making hyperlinks at all? It’s only a matter of time until those links break.
And yet I still keep linking. I still keep pointing to things and saying “Check this out!” even though I know that over a long enough timescale, there’s little chance that the link will hold.
In a sense, every hyperlink on the World Wide Web is a little act of hope. Even though I know that when I link to something, it probably won’t last, I still harbour that hope.
If hyperlinks are built on hope, and the web is made of hyperlinks, then in a way, the World Wide Web is quite literally made out of hope.
I like that.