
Creepy potato.
No, no, that was totally down to me reading in haste.
Ah, right, I see now that you said replacing with an actual button is better than adding a role of “button” to a link—that makes sense! So if JavaScript replaces the links with buttons, I may be on my way to covering both scenarios.
Yes, better for screen reader support where the JavaScript executes, but not so good for any situations—screen reader or otherwise—where JavaScript is unavailable (a link would still work as a link).
I wish I could handle both scenarios.
If JavaScript adds a role of “button” to the link, would that deal with the expectation issue?
(That would still allow the link to be a fallback for non-JS scenarios.)
Yes! As soon as you add preventDefault() to an event listener, you’re signing up to handle all the responsibilities that the browser would usually take care of.
Then make them real pages.
Completely agree with you there!
Big modals that are basically fake web pages—even if coded accessibly—feel deceptive, lacking in material honesty.
If it’s true that screen reader users expect all links to go to a new page, then are regular internal page links (that use IDs) an anti-pattern to be avoided?
e.g. Wikipedia articles with a table of contents. Fragment identifier URLs.
A simple, real-time website scanner to see what invisible creepers are lurking in the shadows and collecting information about you.
Looks good for adactio.com, thesession.org, and huffduffer.com …but clearleft.com is letting the side down.
I posted something recently that I think might be categorised as a “shitpost”:
Most single page apps are just giant carousels.
Extreme, yes, but perhaps there’s a nugget of truth to it. And it seemed to resonate:
I’ve never actually seen anybody justify SPA transitions with actual business data. They generally don’t seem to increase sales, conversion, or retention.
For some reason, for SPAs, managers are all of a sudden allowed to make purely emotional arguments: “it feels snappier”
If businesses were run rationally, when somebody asks for an order of magnitude increase in project complexity, the onus would be on them to prove that it proportionally improves business results.
But I’ve never actually seen that happen in a software business.
A single page app architecture makes a lot of sense for interaction-heavy sites with lots of state to maintain, like twitter.com. But I’ve seen plenty of sites built as single page apps even though there’s little to no interactivity or state management. For some people, it’s the default way of building anything on the web, even a brochureware site.
It seems like there’s a consensus that single page apps may have long initial loading times, but then they have quick transitions between “pages” …just like a carousel really. But I don’t know if that consensus is based on reality. Whether you’re loading a page of HTML or loading a chunk of JSON, you’re still making a network request that will take time to resolve.
The argument for loading a chunk of JSON is that you don’t have to make any requests for the associated CSS and JavaScript—they’re already loaded. Whereas if you request a page of HTML, that HTML will also request CSS and JavaScript.
Leaving aside the fact that this is literally what the browser cache takes care of, I’ve seen some circular reasoning around this:
To be fair, in the past, the experience of going from page to page used to feel a little herky-jerky, even if the response times were quick. You’d get a flash of a white blank page between navigations. But that’s no longer the case. Browsers now perform something called “paint holding” which eliminates the herky-jerkiness.
So now if your pages are a reasonable size, there’s no practical difference in user experience between full page refreshes and single page app updates. Navigate around The Session if you want to see paint holding in action. Switching to a single page app architecture wouldn’t improve the user experience one jot.
Except…
If I were controlling everything with JavaScript, then I’d also have control over how to transition between the “pages” (or carousel items, if you prefer). There’s currently no way to do that with full page changes.
This is the problem that Jake set out to address in his proposal for navigation transitions a few years back:
Having to reimplement navigation for a simple transition is a bit much, often leading developers to use large frameworks where they could otherwise be avoided. This proposal provides a low-level way to create transitions while maintaining regular browser navigation.
I love this proposal. It focuses on user needs. It also asks why people reach for JavaScript frameworks instead of using what browsers provide. People reach for JavaScript frameworks because browsers don’t yet provide some functionality: components like tabs or accordions; DOM diffing; control over styling complex form elements; navigation transitions. The problems that JavaScript frameworks are solving today should be seen as the R&D departments for web standards of tomorrow. (And conversely, I strongly believe that the aim of any good JavaScript framework should be to make itself redundant.)
I linked to Jake’s excellent proposal in my shitpost saying:
bucketloads of JavaScript wouldn’t be needed if navigation transitions were available in browsers
But then I added—and I almost didn’t—this:
(not portals)
Now you might be asking yourself what Paul said out loud:
Excuse my ignorance but… WTF are portals!?
I replied with a link to the portals proposal and what I thought was an example use case:
Portals are a proposal from Google that would help their AMP use case (it would allow a web page to be pre-rendered, kind of like an iframe).
That was based on my reading of the proposal:
…show another page as an inset, and then activate it to perform a seamless transition to a new state, where the formerly-inset page becomes the top-level document.
It sounded like Google’s top stories carousel. And the proposal goes into a lot of detail around managing cross-origin requests. Again, that strikes me as something that would be more useful for a search engine than a single page app.
But Jake was not happy with my description. I didn’t intend to besmirch portals by mentioning Google AMP in the same sentence, but I can see how the transitive property of ickiness would apply. Because Google AMP is a nasty monopolistic project that harms the web and is an embarrassment to many open web advocates within Google, drawing any kind of comparison to AMP is kind of like Godwin’s Law for web stuff. I know that makes it sound like I’m comparing Google AMP to Hitler, and just to be clear, I’m not (though I have myself been called a fascist by one of the lead engineers on AMP).
Clearly, emotions run high when Google AMP is involved. I regret summoning its demonic presence.
After chatting with Jake some more, I tried to find a better use case to describe portals. Reading the proposal, portals sound a lot like “spicy iframes”. So here’s a different use case that I ran past Jake: say you’re on a website that has an iframe embedded in it—like a YouTube video, for example. With portals, you’d have the ability to transition the iframe to a fully-fledged page smoothly.
But Jake told me that even though the proposal talks a lot about iframes and cross-origin security, portals are conceptually more like using rel="prerender" …but then having scripting control over how the pre-rendered page becomes the current page.
Put like that, portals sound more like Jake’s original navigation transitions proposal. But I have to say, I never would’ve understood that use case just from reading the portals proposal. I get that the proposal is aimed more at implementers than authors, but in its current form, it doesn’t seem to address the use case of single page apps.
we haven’t seen interest from SPA folks in portals so far.
I’m not surprised! He goes on:
Maybe, they are happy / benefits aren’t clear yet.
From my own reading of the portals proposal, I think the benefits are definitely not clear. It’s almost like the opposite of Jake’s original proposal for navigation transitions. Whereas that was grounded in user needs and real-world examples, the portals proposal seems to have jumped to the intricacies of implementation without covering the user needs.
Don’t get me wrong: if portals somehow end up leading to a solution more like Jake’s navigation transitions proposals, then I’m all for that. That’s the end result I care about. I’d love it if people had a lightweight option for getting the perceived benefits of single page apps without the costly overhead in performance that comes with JavaScripting all the things.
I guess the web I want includes giant carousels.
There’s the story of T.E. Lawrence losing the first manuscript of The Seven Pillars Of Wisdom on a train …though it’s more likely that the story is his version of “the dog ate my homework” because he didn’t like what he’d written.
Well, this is a weird example but look at the output of this XML https://thesession.org/tunes/new?format=xml with and without the extension enabled. With the extension, you can see the JavaScript dumped to the screen.
Ah, interesting! I had that installed until very recently too: I had to disable it when I discovered it was inserting JavaScript into every response (making debugging very difficult). We should tell the good folks at @DuckDuckGo.
It loads for me: Firefox 82.0.1 on Mac.
Do you think maybe a browser extension might be the culprit? (I speak from bitter hair-pulling experience.)
I brainstormed that for a bit:
https://github.com/w3c/web-share/issues/176#issuecomment-694090749
I’m going to apologise to Roy Fielding for even thinking it.
Can’t beat a good vinaigrette:
Sara tweeted something recently that resonated with me:
Also, Pro Tip: Using ARIA attributes as CSS hooks ensures your component will only look (and/or function) properly if said attributes are used in the HTML, which, in turn, ensures that they will always be added (otherwise, the component will obv. be broken)
Yes! I didn’t mention it when I wrote about accessible interactions but this is my preferred way of hooking up CSS and JavaScript interactions. Here’s an old Codepen where you can see it in action:
[aria-hidden='true'] {
display: none;
}
In order for the functionality to work for everyone—screen reader users or not—I have to make sure that I’m toggling the value of aria-hidden in my JavaScript.
There’s another advantage to this technique. Generally, ARIA attributes—like aria-hidden—are added by JavaScript at runtime (rather than being hard-coded in the HTML). If something goes wrong with the JavaScript, the aria-hidden value isn’t set to “true”, which means that the CSS never kicks in. So the default state is for content to be displayed. There’s no assumption that the JavaScript has to work in order for the CSS to make sense.
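Here’s a rough sketch of the kind of JavaScript I mean (the element IDs are hypothetical):

// Toggle the target's aria-hidden value whenever the trigger is activated.
var trigger = document.getElementById('trigger');
var target = document.getElementById('target');
trigger.addEventListener('click', function () {
  var hidden = target.getAttribute('aria-hidden') === 'true';
  target.setAttribute('aria-hidden', String(!hidden));
}, false);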
It’s almost as though accessibility and progressive enhancement are connected somehow…
I like the fallback you get with a link (assuming it’s using a valid fragment identifier)—if anything goes screwy with the JavaScript, the link still works.
I’d be interested in getting your take on the logic I’m using here: https://adactio.com/journal/17546
…generally you can’t go wrong with a button. … That said, I think that links can also make sense in certain situations.
Eating toast (with marmite).
I’m an agent of the 28th Amendment, the abolition of the 2nd. If it sounds sanctimonious to trace my authority to a decade-old government document that I have never read rather than my employee handbook, it’s only because I value my life.
Here are 8 original visions of Africanfuturism: science fiction stories by both emerging and seasoned African writers staking a claim to Africa’s place in the future. These are powerful visions focused on the African experience and hopes and fears, exploring African sciences, philosophies, adaptations to technology and visions of the future both centred on and spiralling out of Africa. You will find stories of the near and almost-present future, tales set on strange and wonderful new planets, stories of a changed Earth, stories that dazzle the imagination and stimulate the mind. Stories that capture the essence of what we talk about when we talk about Africanfuturism.
I think this is quite beautiful—no need to view source; the style sheet is already in the document.
Yeah, that’s fair—if I had a time machine, I’d love to go back and make cookies same-origin only.
And JavaScript!
Yeah …spicy!
It’s that emphasis on “between origins” that gets me (though I understand the security concerns, of course). Jake’s original proposal seemed more focused on same-origin page-level transitions …which is most single page apps today.
You’re right. I don’t have any in-depth knowledge here. I was trying to describe a proposal being incubated. I used an example. It was a bad example, I guess.
From now on I’ll just describe portals as “spicy iframes” and leave it at that.
Jake, I’m not saying that if a technology is useful for AMP then it must be bad—see rel=”prerender”, as you say.
I was honestly, genuinely trying to give an example of where portals could be used based on the description in the explainer.
Note that I didn’t say that portals came from AMP; I said they would help the AMP use case.
But I think I must be misunderstanding portals because it sounds to me like it would work great for the AMP top stories carousel.
Apologies. I thought the use-case sounded a lot like AMP’s top stories:
…show another page as an inset, and then activate it to perform a seamless transition to a new state, where the formerly-inset page becomes the top-level document.
Google reCAPTCHAs that will help power their new border wall contract:
“Please select all the squares containing children we’re going to separate from their families and put in cages.”
Don’t get me wrong: it would be great if portals led to navigation transitions, but right now it looks like the focus is more on “like making an iframe go full page” e.g. an item in a news carousel on a search engine.
My description of portals was genuine. I gave a use case (AMP) and a comparison (iframes). I didn’t pass any judgement (although I can see how just mentioning AMP implies ickiness by association).
Portals are a proposal from Google that would help their AMP use case (it would allow a web page to be pre-rendered, kind of like an iframe).
Most single page apps are just giant carousels.
Their bucketloads of JavaScript wouldn’t be needed if navigation transitions were available in browsers:
https://github.com/jakearchibald/navigation-transitions
(not portals)
A very handy community project that documents support for ARIA and native HTML accessibility features in screen readers and browsers.
Accessibility on the web is easy. Accessibility on the web is also hard.
I think it’s one of those 80/20 situations. The most common accessibility problems turn out to be very low-hanging fruit. Take, for example, Holly Tuke’s list of the 5 most annoying website features she faces as a blind person every single day:
- Unlabelled links and buttons
- No image descriptions
- Poor use of headings
- Inaccessible web forms
- Auto-playing audio and video
None of those problems are hard to fix. That’s what I mean when I say that accessibility on the web is easy. As long as you’re providing a logical page structure with sensible headings, associating form fields with labels, and providing alt text for images, you’re at least 80% of the way there (you’re also doing way better than the majority of websites, sadly).
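To make that concrete, most of that 80% is nothing more exotic than markup like this (a generic example):

<label for="email">Email address</label>
<input id="email" type="email" name="email">
<img src="/images/photo.jpg" alt="A mandolin resting on a table in a pub.">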
Ah, but that last 20% or so—that’s where things get tricky. Instead of easy-to-follow rules (“Always provide alt text”, “Always label form fields”, “Use sensible heading levels”), you enter an area of uncertainty and doubt where there are no clear answers. Different combinations of screen readers, browsers, and operating systems might yield very different results.
This is the domain of interaction design. Here be dragons. ARIA can help you …but if you overuse its power, it may cause more harm than good.
When I start to feel overwhelmed by this, I find it’s helpful to take a step back. Instead of trying to imagine all the possible permutations of screen readers and browsers, I start with a more straightforward use case: keyboard users. Screen reader users are (usually) a subset of keyboard users.
The pattern that comes up the most is to do with toggling content. I suppose you could categorise this as progressive disclosure, but I’m talking about quite a wide range of patterns: menus, accordions, tabs, modal dialogs, and so on.
In each case, there’s some kind of “trigger” that toggles the appearance of a “target”—some chunk of content.
The first question I ask myself is whether the trigger should be a button or a link (at the very least you can narrow it down to that shortlist—you can discount divs, spans, and most other elements immediately; use a trigger that’s focusable and interactive by default).
As is so often the case, the answer is “it depends”, but generally you can’t go wrong with a button. It’s an element designed for general-purpose interactivity. It carries the expectation that when it’s activated, something somewhere happens. That’s certainly true in all the examples I’ve listed above.
That said, I think that links can also make sense in certain situations. It’s related to the second question I ask myself: should the target automatically receive focus?
Again, the answer is “it depends”, but here’s the litmus test I give myself: how far away from each other are the trigger and the target?
If the target content is right after the trigger in the DOM, then a button is almost certainly the right element to use for the trigger. And you probably don’t need to automatically focus the target when the trigger is activated: the content already flows nicely.
<button>Trigger Text</button>
<div id="target">
<p>Target content.</p>
</div>
But if the target is far away from the trigger in the DOM, I often find myself using a good old-fashioned hyperlink with a fragment identifier.
<a href="#target">Trigger Text</a>
…
<div id="target">
<p>Target content.</p>
</div>
Let’s say I’ve got a “log in” link in the main navigation. But it doesn’t go to a separate page. The design shows it popping open a modal window. In this case, the markup for the log-in form might be right at the bottom of the page. This is when I think there’s a reasonable argument for using a link. If, for any reason, the JavaScript fails, the link still works. But if the JavaScript executes, then I can hijack that link and show the form in a modal window. I’ll almost certainly want to automatically focus the form when it appears.
The expectation with links (as opposed to buttons) is that you will be taken somewhere. Let’s face it, modal dialogs are like fake web pages so following through on that expectation makes sense in this context.
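As a sketch, that enhancement might look something like this (the fragment identifier and the openModal function are hypothetical):

var loginLink = document.querySelector('a[href="#login"]');
loginLink.addEventListener('click', function (event) {
  // By calling preventDefault(), I take on the link's responsibilities.
  event.preventDefault();
  var loginForm = document.getElementById('login');
  openModal(loginForm); // hypothetical function that displays the modal
  loginForm.querySelector('input').focus(); // automatically focus the form
});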
So I can answer my first two questions:

- Should the trigger be a button or a link?
- Should the target automatically receive focus?

…by answering a different question: how far away from each other are the trigger and the target?
It’s not a hard and fast rule, but it helps me out when I’m unsure.
At this point I can write some JavaScript to make sure that both keyboard and mouse users can interact with the interactive component. There’ll certainly be an addEventListener(), some tabindex action, and maybe a focus() method.
Now I can start to think about making sure screen reader users aren’t getting left out. At the very least, I can toggle an aria-expanded attribute on the trigger that corresponds to whether the target is being shown or not. I can also toggle an aria-hidden attribute on the target.
- When the target isn’t being shown: aria-expanded="false", aria-hidden="true".
- When the target is shown: aria-expanded="true", aria-hidden="false".

There’s also an aria-controls attribute that allows me to explicitly associate the trigger and the target:
<button aria-controls="target">Trigger Text</button>
<div id="target">
<p>Target content.</p>
</div>
But don’t assume that’s going to help you. As Heydon put it, aria-controls is poop. Still, Léonie points out that you can still go ahead and use it. Personally, I find it a useful “hook” to use in my JavaScript so I know which target is controlled by which trigger.
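Putting those pieces together, the wiring might look something like this (a simplified sketch, not my actual code):

// Use aria-controls as the hook to find each trigger's target.
var triggers = document.querySelectorAll('button[aria-controls]');
Array.prototype.forEach.call(triggers, function (trigger) {
  var target = document.getElementById(trigger.getAttribute('aria-controls'));
  // Set the initial state: trigger collapsed, target hidden.
  trigger.setAttribute('aria-expanded', 'false');
  target.setAttribute('aria-hidden', 'true');
  trigger.addEventListener('click', function () {
    var expanded = trigger.getAttribute('aria-expanded') === 'true';
    trigger.setAttribute('aria-expanded', String(!expanded));
    target.setAttribute('aria-hidden', String(expanded));
  }, false);
});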
Here’s some example code I wrote a while back. And here are some old Codepens I made that use this pattern: one with a button and one with a link. See the difference? In the example with a link, the target automatically receives focus. But in this situation, I’d choose the example with a button because the trigger and target are close to each other in the DOM.
At this point, I’ve probably reached the limits of what can be abstracted into a single trigger/target pattern. Depending on the specific component, there might be much more work to do. If it’s a modal dialog, for example, you’ve got to figure out where to put the focus, how to trap the focus, and figure out where the focus should return to when the modal dialog is closed.
I’ve mostly been talking about websites that have some interactive components. If you’re building a single page app, then pretty much every single interaction needs to be made accessible. Good luck with that. (Pro tip: consider not building a single page app—let the browser do what it has been designed to do.)
Anyway, I hope this little stroll through my thought process is useful. If nothing else, it shows how I attempt to cope with an accessibility landscape that looks daunting and ever-changing. Remember though, the fact that you’re even considering this stuff means you care more than most web developers. And you are not alone. There are smart people out there sharing what they learn. The A11y Project is a great hub for finding resources.
And when it comes to interactive patterns like the trigger/target examples I’ve been talking about, there’s one more question I ask myself: what would Heydon do?
Van11y (for Vanilla-Accessibility) is a collection of accessible scripts for rich interfaces elements, built using progressive enhancement and customisable.
Collusion between three separate services owned by the same company: the Google search engine, the YouTube website, and the Chrome web browser.
Gosh, this kind of information could be really damaging if there were, say, antitrust proceedings initiated.
In the meantime, use Firefox
A devastating deep dive into the hype of blockchain, written by Jesse Frederik and translated by Hannah Kousbroek:
I’ve never seen so much incomprehensible jargon to describe so little. I’ve never seen so much bloated bombast fall so flat on closer inspection. And I’ve never seen so many people searching so hard for a problem to go with their solution.
I’ve been like a dog with a bone the way I’ve been pushing for a declarative option for the Web Share API in the shape of button type=“share”. It’s been an interesting window into the world of web standards.
The story so far…
That’s the situation currently. The general consensus seems to be that it’s probably too soon to be talking about implementation at this stage—the Web Share API itself is still pretty new—but gathering data to inform future work is good.
In planning for the next TPAC meeting (the big web standards gathering), Marcos summarised the situation like this:
Not blocking: but a proposal was made by @adactio to come up with a declarative solution, but at least two implementers have said that now is not the appropriate time to add such a thing to the spec (we need more implementation experience + and also to see how devs use the API) - but it would be great to see a proposal incubated at the WICG.
Now this is where things can get a little confusing because it used to be that if you wanted to incubate a proposal, you’d have to do it on Discourse, which is a steaming pile of crap that requires JavaScript in order to put text on a screen. But Šime pointed out that proposals can now be submitted on GitHub.
So that’s where I’ve submitted my proposal, linking through to the explainer document.
Like I said, I’m not expecting anything to happen anytime soon, but it would be really good to gather as much data as possible around existing usage of the Web Share API. If you’re using it, or you know anyone who’s using it, please, please, please take a moment to provide a quick description. And if you could help spread the word to get that issue in front of as many devs as possible, I’d be very grateful.
(Many thanks to everyone who’s already contributed to that issue—much appreciated!)
Are you saying he should grow a pair of test articles?
Fnarr, and indeed, fnarr.
The upside to being a terrible procrastinator is that certain items on my to-do list, like, say, “build a chatbot”, will—given enough time—literally take care of themselves.
I ultimately feel like it has slowly turned into a fad. I got fooled by the trend, and as a by-product became part of the trend itself.
Yay! Welcome to the indie web!
I feel like there should be a website equivalent of a housewarming party—a homepage-warming party or something!
Playing Rip The Calico (reel) on mandolin:
Vendor prefixes didn’t work. The theory was sound. It was a way of marking CSS and JavaScript features as being experimental. Developers could use the prefixed properties as long as they understood that those features weren’t to be relied upon.
That’s not what happened though. Developers used vendor-prefixed properties as though they were stable. Tutorials were published that basically said “Go ahead and use these vendor-prefixed properties and ship it!” There were even tools that would add the prefixes for you so you didn’t have to type them out for yourself.
Browsers weren’t completely blameless either. Long after features were standardised, they would only be supported in their prefixed form. Apple was and is the worst for this. To this day, if you want to use the clip-path property in your CSS, you’ll need to duplicate your declaration with -webkit-clip-path if you want to support Safari. It’s been like that for seven years and counting.
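In practice, that means writing every declaration twice (the selector and shape here are just for illustration):

.fancy-shape {
  -webkit-clip-path: circle(50% at center);
  clip-path: circle(50% at center);
}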
Like capitalism, vendor prefixes were one of those ideas that sounded great in theory but ended up being unworkable in practice.
Still, developers need some way to get their hands on experimental features. But we don’t want browsers to ship experimental features without some kind of safety mechanism.
The current thinking involves something called origin trials. Here’s the explainer from Microsoft Edge and here’s Google Chrome’s explainer:
- Developers are able to register for an experimental feature to be enabled on their origin for a fixed period of time measured in months. In exchange, they provide us their email address and agree to give feedback once the experiment ends.
- Usage of these experiments is constrained to remain below Chrome’s deprecation threshold (< 0.5% of all Chrome page loads) by a system which automatically disables the experiment on all origins if this threshold is exceeded.
I think it works pretty well. If you’re really interested in kicking the tyres on an experimental feature, you can opt in to the origin trial. But it’s very clear that you wouldn’t want to ship it to production.
That said…
You could ship something that’s behind an origin trial, but you’d have to make sure you’re putting safeguards in place. At the very least, you’d need to do feature detection. You certainly couldn’t use an experimental feature for anything mission critical …but you could use it as an enhancement.
And that is a pretty great way to think about all web features, experimental or otherwise. Don’t assume the feature will be supported. Use feature detection (or @supports in the case of CSS). Try to use the feature as an enhancement rather than a dependency.
If you treat all browser features as though they’re behind an origin trial, then suddenly the landscape of browser support becomes more navigable. Instead of looking at the support table for something on caniuse.com and thinking, “I wish more browsers supported this feature so that I could use it!”, you can instead think “I’m going to use this feature today, but treat it as an experimental feature.”
You can also do it for well-established features like querySelector, addEventListener, and geolocation. Instead of assuming that browser support is universal, it doesn’t hurt to take a more defensive approach. Assume nothing. Acknowledge and embrace unpredictability.
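For instance, before using geolocation I’d check that it actually exists—a minimal defensive sketch:

// Treat even a well-established feature as though it were experimental.
if ('geolocation' in navigator) {
  navigator.geolocation.getCurrentPosition(function (position) {
    console.log(position.coords.latitude, position.coords.longitude);
  });
} else {
  // No geolocation? Fall back gracefully or simply do without.
}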
The debacle with vendor prefixes shows what happens if we treat experimental features as though they’re stable. So let’s flip that around. Let’s treat stable features as though they’re experimental. If you cultivate that mindset, your websites will be more robust and resilient.
More on battling entropy:
Ever needed to change “just a small thing” on an old page you built years ago? I recently had the pleasure, and the simple task of changing some colors in CSS led to a whole day of me wrangling with old deprecated Grunt tasks and trying to get the build task running.
The solution:
That’s why starting with HTML, CSS and JavaScript without the need to ever compile anything on your local machine is a good idea. Changing some colors on such a page would indeed only take minutes and not a whole day.
I like this mindset:
Be boring by default and enhance on the way.
I like this idea for a minimum viable note-taking app:
data:text/html,<body contenteditable style="line-height:1.5;font-size:20px;">
I have added this to my bookmarks and now my zero-weight text editor is one keypress away from me. You might also use it as a temporary clipboard to paste text or even pictures.
See also: a minimum viable code editor.
To be blunt, I feel we, the folks who have been involved with designing and developing for the web for a significant period of time–including me as I feel a strong sense of personal responsibility here–are in no small part responsible for it falling far short of its promise.
Oh, that looks soooo goooood!
Rachel is doing her dissertation project on the history of web design and development:
I intend this site to become a place to gather the stories of the early efforts to create an open web.
Take the survey to help out!
I can’t remember the last time I saw somebody using a hashtag on Twitter.
It’s like when the bees started disappearing. There’s some kind of hashtag collapse disorder.
I’d maybe simplify this people problem a bit: the codebase is easy to change, but the incentives within a company are not. And yet it’s the incentives that drive what kind of code gets written — what is acceptable, what needs to get fixed, how people work together. In short, we cannot be expected to fix the code without fixing the organization, too.
A timeline showing the history of non-digital dataviz.
Checked in at Baker Street Coffee. Flat whites outdoors — with Jessica
Reading A Paradise Built in Hell: The Extraordinary Communities That Arise in Disaster by Rebecca Solnit.
It’s understandable to think that JavaScript frameworks and their communities are eating the web because places like Twitter are awash with very loud voices from said communities.
Always remember that although a subset of the JavaScript community can be very loud, they represent a paltry portion of the web as a whole.
Take a look at your smartphone and delete all the apps you don’t really need. For many tasks, you can use a browser on your phone instead of an app.
Privacy-wise, browsers are preferable, because they can’t access as much of your information as an app can.
❤️
Cracking open a @Beerleft to toast fifteen years of @Clearleft!
Sounds like you need more roughage in your diet. Or you could try drinking prune juice.
When it finally happens, just imagine how satisfying that blog post is going to be!
This is an intriguing promise (there’s no code yet):
A PWA typically requires writing a service worker, an app manifest and a ton of custom code. Progressier flattens the learning curve. Just add it to your html template — you’re done.
I worry that this one line of code will pull in many, many, many, many lines of JavaScript.
Ambient reassurance is the experience of small, unplanned moments of interaction with colleagues that provide reassurance that you’re on the right track. They provide encouragement and they help us to maintain self belief in those moments where we are liable to lapse into unproductive self doubt or imposter syndrome.
In hindsight I realise, these moments flowed naturally in an office environment.
Cassie pointed me to this very nifty tool (that she plans to use in her SVG animation workshop): choose a font from Google Fonts, type some text, and get the glyphs immediately translated into an SVG!
Another five pieces of sweet, sweet low-hanging fruit:
- Always label your inputs.
- Highlight input element on focus.
- Break long forms into smaller sections.
- Provide error messages.
- Avoid horizontal layout forms unless necessary.
Fellow front-end developers, are you using the Web Share API? If so, would you mind taking a moment to briefly document (or link to) how you’re using it here:
https://github.com/adactio/share-button-type/issues/1
Thank you muchly!
I added a long-overdue enhancement to The Session recently. Here’s the scenario…
You’re on a web page with a comment form. You type your well-considered thoughts into a textarea field. But then something happens. Maybe you accidentally navigate away from the page or maybe your network connection goes down right when you try to submit the form.
This is a textbook case for storing data locally on the user’s device …at least until it has safely been transmitted to the server. So that’s what I set about doing.
My first decision was choosing how to store the data locally. There are multiple APIs available: sessionStorage, IndexedDB, localStorage. It was clear that sessionStorage wasn’t right for this particular use case: I needed the data to be saved across browser sessions. So it was down to IndexedDB or localStorage. IndexedDB is the more versatile and powerful—because it’s asynchronous—but localStorage is nice and straightforward so I decided on that. I’m not sure if that was the right decision though.
Alright, so I’m going to store the contents of a form in localStorage. It accepts key/value pairs. I’ll make the key the current URL. The value will be the contents of that textarea. I can store other form fields too. Even though localStorage technically only stores one value, that value can be a JSON object so in reality you can store multiple values with one key (just remember to parse the JSON when you retrieve it).
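In other words, something along these lines (a sketch—the field name is made up):

// Store one or more form fields under a single key: the current URL.
var formData = {
  comment: document.querySelector('textarea').value
};
localStorage.setItem(window.location.href, JSON.stringify(formData));

// Later, retrieve and parse it.
var stored = JSON.parse(localStorage.getItem(window.location.href) || '{}');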
Now I know what I’m going to store (the textarea contents) and how I’m going to store it (localStorage). The next question is when should I do it?
I could play it safe and store the comment whenever the user presses a key within the textarea. But that seems like overkill. It would be more efficient to only save when the user leaves the current page for any reason.
Alright then, I’ll use the unload event. No! Bad Jeremy! If I use that then the browser can’t reliably add the current page to the cache it uses for faster back-forwards navigations. The page life cycle is complicated.
So beforeunload then? Well, maybe. But modern browsers also support a pagehide event that looks like a better option.
In either case, just adding a listener for the event could screw up the caching of the page for back-forwards navigations. I should only listen for the event if I know that I need to store the contents of the textarea. And in order to know if the user has interacted with the textarea, I’m back to listening for key presses again.

But wait a minute! I don’t have to listen for every key press. If the user has typed anything, that’s enough for me. I only need to listen for the first key press in the textarea.
Handily, addEventListener accepts an object of options. One of those options is called “once”. If I set that to true, then the event listener is only fired once.
So I set up a cascade of event listeners. If the user types anything into the textarea, that fires an event listener (just once) that then adds the event listener for when the page is unloaded—and that’s when the textarea contents are put into localStorage.
I’ve abstracted my code into a gist. Here’s what it does:

- If the browser doesn’t support localStorage, bail out.
- Set the localStorage key to be the current URL.
- If there’s already an entry for the current URL, update the textarea with the value in localStorage.
- Write a function to store the contents of the textarea in localStorage but don’t call the function yet.
- The first time a key is pressed inside the textarea, start listening for the page being unloaded.
- When the page is being unloaded, store the contents of the textarea in localStorage.
- When the form is submitted, remove the entry in localStorage for the current URL.

That last step isn’t something I’m doing on The Session. Instead I’m relying on getting something back from the server to indicate that the form was successfully submitted. If you can do something like that, I’d recommend that instead of listening to the form submission event. After all, something could still go wrong between the form being submitted and the data being received by the server.
Still, this bit of code is better than nothing. Remember, it’s intended as an enhancement. You should be able to drop it into any project and improve the user experience a little bit. Ideally, no one will ever notice it’s there—it’s the kind of enhancement that only kicks in when something goes wrong. A little smidgen of resilient web design. A defensive enhancement.
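If you’d like to see the shape of the whole thing without clicking through to the gist, here’s a condensed sketch of the cascade described above (simplified—the real code has more going on):

(function () {
  // If the browser doesn't support localStorage, bail out.
  if (!window.localStorage) return;
  // Set the localStorage key to be the current URL.
  var key = window.location.href;
  var textarea = document.querySelector('textarea');
  // If there's already an entry for this URL, restore it.
  if (localStorage.getItem(key)) {
    textarea.value = localStorage.getItem(key);
  }
  function saveContents() {
    localStorage.setItem(key, textarea.value);
  }
  // After the first key press (and only then), listen for the page being unloaded.
  textarea.addEventListener('keydown', function () {
    window.addEventListener('pagehide', saveContents);
  }, { once: true });
  // When the form is submitted, remove the entry for the current URL.
  textarea.form.addEventListener('submit', function () {
    localStorage.removeItem(key);
  });
})();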
My name is Jeremy Keith and I endorse this message:
I love the modern JS platform (the stuff the browser does for you), and hate modern JS tooling.
James made a radio programme about “the cloud”:
It’s the central metaphor of the internet - ethereal and benign, a fluffy icon on screens and smartphones, the digital cloud has become so naturalised in our everyday life we look right through it. But clouds can also obscure and conceal – what is it hiding? Author and technologist James Bridle navigates the history and politics of the cloud, explores the power of its metaphor and guides us back down to earth.
The unfair collusion between Google AMP and Google Search might just bite ‘em on the ass.
Five pieces of low-hanging fruit:
- Unlabelled links and buttons
- No image descriptions
- Poor use of headings
- Inaccessible web forms
- Auto-playing audio and video
I’m just saying what everyone else is thinking, Ethan.
First they tell us to “eat out to help out.” Now they’re asking if we want to cyber.
This government is horny on main is what I’m saying.
From day one, I’ve been a fan of Jay Hoffman’s project The History Of The Web—both the newsletter and the evolving timeline.
Recently Jay started publishing essays on web history over on CSS Tricks:
Round about that time, Chris floated the idea of having people record themselves reading blog posts. I immediately volunteered my services for the web history essays.
So now you can listen to me reading Jay’s words:
Each chapter is round about half an hour long so that’s a solid two hours or so of me yapping.
Should you wish to take the audio with you wherever you go, I’ve made a podcast feed for you. Pop that in your podcatching software of choice. Here it is on Apple Podcasts. Here it is on Spotify.
And if you just can’t get enough of my voice, there’s always the Clearleft podcast …although that’s mostly other people talking, thank goodness.
This post really highlights one of the biggest issues with the convoluted build tools used for “modern” web development. If you return to a project after any length of time, this is what awaits:
I find entropy staring me back in the face: library updates, breaking API changes, refactored mental models, and possible downright obsolescence. An incredible amount of effort will be required to make a simple change, test it, and get it live.
Take a moment and think about this super power: if you write vanilla HTML, CSS, and JS, all you have to do is put that code in a web browser and it runs. Edit a file, refresh the page, you’ve got a feedback cycle. As soon as you introduce tooling, as soon as you introduce an abstraction not native to the browser, you may have to invent the universe for a feedback cycle.
Maintainability matters—if not for you, then for future you.
The more I author code as it will be run by the browser the easier it will be to maintain that code over time, despite its perceived inferior developer ergonomics (remember, developer experience encompasses both the present and the future, i.e. “how simple are the ergonomics to build this now and maintain it into the future?”). I don’t mind typing some extra characters now if it means I don’t have to learn/relearn, setup, configure, integrate, update, maintain, and inevitably troubleshoot a build tool or framework later.
Now that I own a company… https://adactio.com/journal/17506 …I’m going to get myself a monocle, a top hat, and a cane.
Clearleft turned fifteen this year. We didn’t make a big deal of it. What with The Situation and all, it didn’t seem fitting to be self-congratulatory. Still, any agency that can survive for a decade and a half deserves some recognition.
Cassie marked the anniversary by designing and building a beautiful timeline of Clearleft’s history.
Here’s a post I wrote 15 years ago:
Most of you probably know this already, but I’ve joined forces with Andy and Richard. Collectively, we are known as Clearleft.
I didn’t make too much of a big deal of it back then. I think I was afraid I’d jinx it. I still kind of feel that way. Fifteen years of success? Beginner’s luck.
Despite being one of the three founders, I was never an owner of Clearleft. I let Andy and Rich take the risks and rewards on their shoulders while I take a salary, the same as any other employee.
But now, after fifteen years, I am also an owner of Clearleft.
So is Trys. And Cassie. And Benjamin. And everyone else at Clearleft.
Clearleft is now owned by an employee ownership trust. This isn’t like owning shares in a company—a common Silicon Valley honeypot. This is literally owning the company. Shares are transferable—this isn’t. As long as I’m an employee at Clearleft, I’m a part owner.
On a day-to-day basis, none of this makes much difference. Everyone continues to do great work, the same as before. The difference is in what happens to any profit produced as a result of that work. The owners decide what to do with that profit. The owners are us.
In most companies you’ve got a tension between a board representing the stakeholders and a union representing the workers. In the case of an employee ownership trust, the interests are one and the same. The stakeholders are the workers.
It’ll be fascinating to see how this plays out. Check back again in fifteen years.
Dim sum, mapo tofu, and dumplings. 🥟
Puntastic!
I don’t know anymore. What are words, even?
Chris shares his thoughts on the ever-widening skillset required of a so-called front-end developer.
Interestingly, the skillset he mentions halfway through (which is what front-end devs used to need to know) really appeals to me: accessibility, performance, responsiveness, progressive enhancement. But the list that covers modern front-end dev sounds more like a different mindset entirely: APIs, Content Management Systems, business logic …the back of the front end.
And Chris doesn’t even touch on the build processes that front-end devs are expected to be familiar with: version control, build pipelines, package management, and all that crap.
I wish we could return to this:
The bigger picture is that as long as the job is building websites, front-enders are focused on the browser.
Tess calls for more precise language—like “site” and “origin”—when talking about browsers and resources:
When talking about web features with security or privacy impact, folks often talk about “first parties” and “third parties”. Everyone sort of knows what we mean when we use these terms, but it turns out that we often mean different things, and what we each think these terms mean usually doesn’t map cleanly onto the technical mechanisms browsers actually use to distinguish different actors for security or privacy purposes.
Personally, rather than say “third-party JavaScript”, I prefer the more squirm-inducing and brutally honest phrase “other people’s JavaScript”.
Considering how much accessibility work happens “under the hood”, it’s interesting that all five of these considerations are visibly testable.
- Think about accessible copy
- Don’t forget about the focus indicator
- Check your colour contrast
- Don’t just use colour to convey meaning
- Design in anticipation of text resizing
A follow-up to the full-bleed layout post I linked to recently. Here’s how you can get the same effect using CSS grid.
I like the use of the principle of least power not just in the choice of languages, but within the application of a language.
Playing The Chapel Bell (jig) on mandolin:
It me.
And yet now, in this moment of semi-stillness, the pause button may have slowed down our geographical dashing, but it has only accelerated our inner flounder. The dull thrum of imprecise apprehension. The gratitude for semi-safety made weird by the ever-blooming realisation that there is little to get excited about.
Match up images that have been posted in pairs to Twitter with the caption “same energy”. This is more fun and addictive than it has any right to be.
The “Adjust CSS” slider on this delightful homepage is an effective (and cute) illustration of progressive enhancement in action.
If you’re at all interested in what I wrote about a declarative Web Share API—and its sequel, a polyfill for button type=”share”—then you might be interested in an explainer document I’ve put together.
It’s a useful exercise for me to enumerate the reasoning for button type=“share” in one place. If you have any feedback, feel free to fork it or create an issue.
The document is based on my initial blog posts and the discussion that followed in this issue on the repo for the Web Share API. In that thread I got some pushback from Marcos. There are three points he makes. I think that two of them lack merit, but the third one is actually spot on.
Here’s the first bit of pushback:
Apart from placing a button in the content, I’m not sure what the proposal offers over what (at least one) browser already provides? For instance, Safari UI already provides a share button by default on every page
But that is addressed in the explainer document for the Web Share API itself:
The browser UI may not always be available, e.g., when a web app has been installed as a standalone/fullscreen app.
That’s exactly what I wanted to address. Browser UI is not always available and as progressive web apps become more popular, authors will need to provide a way for users to share the current URL—something that previously was handled by browsers.
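For context, here’s roughly what authors have to do today with the imperative API (the button’s class name here is arbitrary); the declarative proposal would boil all of this down to a single button element:

if (navigator.share) {
  document.querySelector('button.share').addEventListener('click', function () {
    navigator.share({
      title: document.title,
      url: document.location.href
    });
  });
}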
That use-case of sharing the current page leads nicely into the second bit of pushback:
The API is specialized… using it to share the same page is kinda pointless.
But again, the explainer document for the Web Share API directly contradicts this:
Sharing the page’s own URL (a very common case)…
Rather than being a difference of opinion, this is something that could be resolved with data. I’d really like to find out how people are currently using the Web Share API. How much of the current usage falls into the category of “share the current page”? I don’t know the best way to gather this data though. If you have any ideas, let me know. I’ve started an issue where you can share how you’re using the Web Share API. Or if you’re not using the Web Share API, but you know someone who is, please let them know.
Okay, so those first two bits of pushback directly contradict what’s in the explainer document for the Web Share API. The third bit of pushback is more philosophical and, I think, more interesting.
The Web Share API explainer document does a good job of explaining why a declarative solution is desirable:
The link can be placed declaratively on the page, with no need for a JavaScript click event handler.
That’s also my justification for having a declarative alternative: it would be easier for more people to use. I said:
At a fundamental level, declarative technologies have a lower barrier to entry than imperative technologies.
That’s demonstrably false and a common misconception: See OWL, XForms, SVG, or any XML+namespace spec. Even HTML is poorly understood, but it just happens to have extremely robust error recovery (giving the illusion of it being easy). However, that’s not a function of it being “declarative”.
He’s absolutely right.
It’s not so much that I want a declarative option—I want an option that has robust error recovery. After all, XML is a declarative language but its error handling is as strict as an imperative language like JavaScript: make one syntactical error and nothing works. XML has a brittle error-handling model by design. HTML and CSS have extremely robust error recovery by design. It’s that error-handling model that gives HTML and CSS their robustness.
I’ve been using the word “declarative” when I actually meant “robust in handling errors”.
I guess that when I’ve been talking about “a declarative solution”, I’ve been thinking in terms of the three languages parsed by browsers: HTML, CSS, and JavaScript. Two of those languages are declarative, and those two also happen to have much more forgiving error-handling than the third language. That’s the important part—the error handling—not the fact that they’re declarative.
I’ve been using “declarative” as a shorthand for “either HTML or CSS”, but really I should try to be more precise in my language. The word “declarative” covers a wide range of possible languages, and not all of them lower the barrier to entry. A declarative language with a brittle error-handling model is as daunting as an imperative language.
I should try to use a more descriptive word than “declarative” when I’m describing HTML or CSS. Resilient? Robust?
With that in mind, button type=“share” is worth pursuing. Yes, it’s a declarative option for using the Web Share API, but more important, it’s a robust option for using the Web Share API.
I invite you to read the explainer document for a share button type and I welcome your feedback …especially if you’re currently using the Web Share API!
When you’ve got a single centered column but you want something (like an image) to break out and span the full width.
These survey results show that creating and maintaining an impactful design system comes with challenges such as planning a clear strategy, managing changes to the system, and fostering design system adoption across the organization. Yet the long-lasting value of a mature design system—like collaboration and better communication—awaits after the hard work of overcoming these challenges is done.
Of course Finland exists!… But birds, on the other hand …well, everyone knows that birds aren’t real.
Do the research!!!
Every day I’ve been recording myself playing a tune and then posting the videos here on my site.
It seems like just yesterday that I wrote about hitting the landmark of 100 tunes. But that was itself 100 days ago. I know this because today I posted my 200th tune.
I’m pretty pleased that I’ve managed to keep up a 200 day streak. I could keep going, but I think I’m going to take a break. I’ll keep recording and posting tunes, but I’m no longer going to give myself the deadline of doing it every single day. I’ll record and post a tune when I feel like it.
It’ll be interesting to see how the frequency changes now. Maybe I’ll still feel like recording a tune most days. Or maybe it’ll become a rare occurrence.
If you want to peruse the 200 tunes recorded so far, you can find them here on my website and in a playlist on YouTube. I also posted some videos to Instagram, but I haven’t been doing that from the start.
I’m quite chuffed with the overall output (even if some of the individual recordings are distinctly sub-par). Recording 200 tunes sounds like a big task by itself, but if you break it down to recording just one tune a day, it becomes so much more manageable. You can stand anything for ten seconds. As I said when I reached the 100 tune mark:
Recording one tune isn’t too much hassle. There are days when it’s frustrating and I have to do multiple takes, but overall it’s not too taxing. But now, when I look at the cumulative result, I’m very happy that I didn’t skip any days.
There was a side effect to recording a short video every day. I created a timeline for my hair. I’ve documented the day-by-day growth of my hair from 200 days ago to today. A self has been inadvertently quantified.
A great talk by Ethan called The Design Systems Between Us.
Playing The Foxhunters (reel) on mandolin:
Playing The Kid On The Mountain (slip jig) on mandolin:
Chris has some kind words to say about the Clearleft podcast:
It’s really well-edited, pulling in clips from relevant talks and such. A cut above the hit-record-hit-stop ‘n’ polish podcasts that I typically do.
Do you mean copy to clipboard? It doesn’t look like that’s an option on MacOS no matter what fields you supply.
Playing Dinkey’s (reel) on mandolin:
AS A user, I WANT TO take it to the bridge SO THAT I CAN get down and do my thang.
If you’re using web fonts, there are good performance (and privacy) reasons for hosting your own font files. And fortunately, Google Fonts gives you that option. There’s a “Download family” button on every specimen page.
But if you go ahead and download a font family from Google Fonts, you’ll notice something a bit odd. The .zip file only contains .ttf files. You can serve those on the web, but it’s far from the best choice. Woff2 is far leaner in file size.
This means you need to manually convert the downloaded .ttf files into .woff or .woff2 files using something like Font Squirrel’s generator. That’s fine, but I’m curious as to why this step is necessary. Why doesn’t Google Fonts provide .woff or .woff2 files in the downloaded folder? After all, if you choose to use Google Fonts as a third-party hosting service for your fonts, it most definitely serves up the appropriate file formats.
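Once you’ve done the conversion, self-hosting is a bog-standard @font-face rule (the file names here are illustrative):

@font-face {
  font-family: "Example Sans";
  src: url("/fonts/example-sans.woff2") format("woff2"),
       url("/fonts/example-sans.woff") format("woff");
  font-weight: normal;
  font-display: swap;
}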
I thought maybe it was something to do with the licensing. Maybe some licenses only allow for unmodified truetype files to be distributed? But I’ve looked at fonts with different licenses—some have Apache 2 licensing, some have Open Font licensing—and they’re all quite permissive and definitely allow for modification.
Maybe the thinking is that, if you’re hosting your own font files, then you know what you’re doing and you should be able to do your own file conversion and subsetting. But I’ve come across more than one website in the wild serving up .ttf files. And who can blame them? They want to host their own font files. They downloaded those files from Google Fonts. Why shouldn’t they assume that they’re good to go?
It’s all a bit strange. If anyone knows why Google Fonts only provides .ttf files for download, please let me know. In a pinch, I will also accept rampant speculation.
Trys also pointed out some weird default behaviour if you do let Google Fonts do the hosting for you. Specifically if it’s a variable font. Let’s say it’s a font with weight as a variable axis. You specify in advance which weights you’ll be using, and then it generates separate font files to serve for each different weight.
Doesn’t that defeat the whole point of using a variable font? I mean, I can see how it could result in smaller file sizes if you’re just using one or two weights, but isn’t half the fun of having a weight axis that you can go crazy with as many weights as you want and it’s all still one font file?
Like I said, it’s all very strange.
Did you know there’s an imagesrcset attribute you can put on link rel="preload" as="image" (along with an imagesizes attribute)?
I didn’t. (Until Amber pointed this out.)
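It works like the srcset and sizes attributes on responsive images, but for preloading. Something like this, I believe (file names hypothetical):

<link rel="preload" as="image" href="hero-medium.jpg"
  imagesrcset="hero-small.jpg 600w, hero-medium.jpg 1200w, hero-large.jpg 2000w"
  imagesizes="100vw">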
This is a superb twenty minute presentation by Trys! It’s got everything: a great narrative, technical know-how, and a slick presentation style.
Conference organisers: you should get Trys to speak at your event!
If you’ve been following my recent blog posts about a declarative option for the Web Share API, you might be interested in this explainer document I’ve put together. It outlines the use case for button type="share".
Trys has been investigating how to incorporate CSS clamp() into the brilliant Utopia project. I won’t pretend to understand all the maths here—this is a very deep dive!

He’s also created a CSS generator Mark 2 if you want to use clamp() in your fluid type.
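If you haven’t played with clamp() yet, it takes a minimum value, a preferred (usually viewport-relative) value, and a maximum—which makes fluid type a one-liner. For example:

h1 {
  font-size: clamp(1.5rem, 1rem + 2.5vw, 3rem);
}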
Employing the principle of least power for better digital preservation:
New frameworks and technologies spring up to try and cope with the speed of change. More and more ways to build and release things faster and cheaper becomes the norm. And, the more this happens, the more we deviate from standards: good ol’ HTML and CSS.
I’d like to see more of this thinking – maybe we could call it the future owners test – in contemporary responsible tech work. We mustn’t get so wrapped up in today that we overlook tomorrow.
I’ll be moderating this online panel next week with Emma Boulton, Holly Habstritt Gaal, Jean Laleuf, and Lola Oyelayo-Pearson.
There are still some spots available—it’s free to register. The discussion won’t be made public; the Chatham House Rule applies.
I’m looking forward to it! Come along if you’re interested in the future of design teams.
What will the near-future look like for design teams? Join us as we explore how processes, team structures and culture might change as our industry matures and grows.
Really? A dystopian future where the survival of democracy and civilisation itself depends on maintaining the postal serv… Oh. Wait.
H1: 18px H2: 14px H3: 14px H4: 12px H5: 12px H6: 12px
Explanation here: https://worldwideweb.cern.ch/code/