Tags: resilience

Monday, October 19th, 2020

Continuous partial browser support

Vendor prefixes didn’t work. The theory was sound. It was a way of marking CSS and JavaScript features as being experimental. Developers could use the prefixed properties as long as they understood that those features weren’t to be relied upon.

That’s not what happened though. Developers used vendor-prefixed properties as though they were stable. Tutorials were published that basically said “Go ahead and use these vendor-prefixed properties and ship it!” There were even tools that would add the prefixes for you so you didn’t have to type them out for yourself.

Browsers weren’t completely blameless either. Long after features were standardised, they would only be supported in their prefixed form. Apple was and is the worst for this. To this day, if you want to use the clip-path property in your CSS, you’ll need to duplicate your declaration with -webkit-clip-path if you want to support Safari. It’s been like that for seven years and counting.
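In practice, supporting Safari means doubling up the declaration, something like this (the selector and shape are just for illustration):

.avatar {
  /* prefixed version first, so the standard property wins where both are supported */
  -webkit-clip-path: circle(50%);
  clip-path: circle(50%);
}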

Like capitalism, vendor prefixes were one of those ideas that sounded great in theory but ended up being unworkable in practice.

Still, developers need some way to get their hands on experimental features. But we don’t want browsers to ship experimental features without some kind of safety mechanism.

The current thinking involves something called origin trials. Here’s the explainer from Microsoft Edge and here’s Google Chrome’s explainer:

  • Developers are able to register for an experimental feature to be enabled on their origin for a fixed period of time measured in months. In exchange, they provide us their email address and agree to give feedback once the experiment ends.
  • Usage of these experiments is constrained to remain below Chrome’s deprecation threshold (< 0.5% of all Chrome page loads) by a system which automatically disables the experiment on all origins if this threshold is exceeded.

I think it works pretty well. If you’re really interested in kicking the tyres on an experimental feature, you can opt in to the origin trial. But it’s very clear that you wouldn’t want to ship it to production.
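In Chrome’s implementation, you register your origin, receive a token, and include that token on your pages, either in an HTTP header or in a meta element like this (token elided):

<meta http-equiv="origin-trial" content="…">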

That said…

You could ship something that’s behind an origin trial, but you’d have to make sure you’re putting safeguards in place. At the very least, you’d need to do feature detection. You certainly couldn’t use an experimental feature for anything mission critical …but you could use it as an enhancement.

And that is a pretty great way to think about all web features, experimental or otherwise. Don’t assume the feature will be supported. Use feature detection (or @supports in the case of CSS). Try to use the feature as an enhancement rather than a dependency.
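As a minimal sketch, that pattern looks like this (the feature and selector are just examples):

// JavaScript: check for the feature before relying on it
if ('IntersectionObserver' in window) {
  // set up the enhancement here
}

/* CSS: these rules only apply where clip-path is supported */
@supports (clip-path: circle(50%)) {
  .avatar {
    clip-path: circle(50%);
  }
}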

If you treat all browser features as though they’re behind an origin trial, then suddenly the landscape of browser support becomes more navigable. Instead of looking at the support table for something on caniuse.com and thinking, “I wish more browsers supported this feature so that I could use it!”, you can instead think “I’m going to use this feature today, but treat it as an experimental feature.”

You can also do it for well-established features like querySelector, addEventListener, and geolocation. Instead of assuming that browser support is universal, it doesn’t hurt to take a more defensive approach. Assume nothing. Acknowledge and embrace unpredictability.
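Even geolocation, as well-established as it is, costs only one extra line to use defensively (the handler names here are hypothetical):

if ('geolocation' in navigator) {
  navigator.geolocation.getCurrentPosition(showNearbyResults, showSearchForm);
}
// if the check fails, whatever non-geolocation interface you provide is still there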

The debacle with vendor prefixes shows what happens if we treat experimental features as though they’re stable. So let’s flip that around. Let’s treat stable features as though they’re experimental. If you cultivate that mindset, your websites will be more robust and resilient.

Wednesday, October 14th, 2020

Saving forms

I added a long-overdue enhancement to The Session recently. Here’s the scenario…

You’re on a web page with a comment form. You type your well-considered thoughts into a textarea field. But then something happens. Maybe you accidentally navigate away from the page or maybe your network connection goes down right when you try to submit the form.

This is a textbook case for storing data locally on the user’s device …at least until it has safely been transmitted to the server. So that’s what I set about doing.

My first decision was choosing how to store the data locally. There are multiple APIs available: sessionStorage, IndexedDB, localStorage. It was clear that sessionStorage wasn’t right for this particular use case: I needed the data to be saved across browser sessions. So it was down to IndexedDB or localStorage. IndexedDB is the more versatile and powerful—because it’s asynchronous—but localStorage is nice and straightforward so I decided on that. I’m not sure if that was the right decision though.

Alright, so I’m going to store the contents of a form in localStorage. It accepts key/value pairs. I’ll make the key the current URL. The value will be the contents of that textarea. I can store other form fields too. Even though localStorage technically only stores strings, a value can be a stringified JSON object, so in reality you can store multiple values with one key (just remember to parse the JSON when you retrieve it).
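In code, that looks something like this (assuming a form with one textarea and a name field; the selectors are illustrative):

var key = window.location.href;
var value = JSON.stringify({
  comment: document.querySelector('textarea').value,
  name: document.querySelector('#name').value
});
localStorage.setItem(key, value);

// later, parse it back into an object:
var stored = JSON.parse(localStorage.getItem(key));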

Now I know what I’m going to store (the textarea contents) and how I’m going to store it (localStorage). The next question is when should I do it?

I could play it safe and store the comment whenever the user presses a key within the textarea. But that seems like overkill. It would be more efficient to only save when the user leaves the current page for any reason.

Alright then, I’ll use the unload event. No! Bad Jeremy! If I use that then the browser can’t reliably add the current page to the cache it uses for faster back-forwards navigations. The page life cycle is complicated.

So beforeunload then? Well, maybe. But modern browsers also support a pagehide event that looks like a better option.

In either case, just adding a listener for the event could screw up the caching of the page for back-forwards navigations. I should only listen for the event if I know that I need to store the contents of the textarea. And in order to know if the user has interacted with the textarea, I’m back to listening for key presses again.

But wait a minute! I don’t have to listen for every key press. If the user has typed anything, that’s enough for me. I only need to listen for the first key press in the textarea.

Handily, addEventListener accepts an object of options. One of those options is called “once”. If I set that to true, then the event listener is only fired once.

So I set up a cascade of event listeners. If the user types anything into the textarea, that fires an event listener (just once) that then adds the event listener for when the page is unloaded—and that’s when the textarea contents are put into localStorage.

I’ve abstracted my code into a gist. Here’s what it does:

  1. Cut the mustard. If this browser doesn’t support localStorage, bail out.
  2. Set the localStorage key to be the current URL.
  3. If there’s already an entry for the current URL, update the textarea with the value in localStorage.
  4. Write a function to store the contents of the textarea in localStorage but don’t call the function yet.
  5. The first time that a key is pressed inside the textarea, start listening for the page being unloaded.
  6. When the page is being unloaded, invoke that function that stores the contents of the textarea in localStorage.
  7. When the form is submitted, remove the entry in localStorage for the current URL.
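Here’s a minimal sketch of those steps (the real code in the gist handles more edge cases; the selector is illustrative):

(function () {
  // 1. Cut the mustard: bail out if localStorage isn't supported
  if (!('localStorage' in window)) return;
  var textarea = document.querySelector('textarea');
  if (!textarea || !textarea.form) return;
  // 2. The key is the current URL
  var key = window.location.href;
  // 3. Restore any previously saved contents
  var stored = localStorage.getItem(key);
  if (stored) {
    textarea.value = stored;
  }
  // 4. A function to do the storing (not called yet)
  function storeContents() {
    localStorage.setItem(key, textarea.value);
  }
  // 5. On the first key press, start listening for the page being unloaded
  textarea.addEventListener('keydown', function () {
    // 6. …and that's when the contents get stored
    window.addEventListener('pagehide', storeContents);
  }, { once: true });
  // 7. On submission, remove the entry for this URL
  textarea.form.addEventListener('submit', function () {
    localStorage.removeItem(key);
  });
})();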

That last step isn’t something I’m doing on The Session. Instead I’m relying on getting something back from the server to indicate that the form was successfully submitted. If you can do something like that, I’d recommend that instead of listening to the form submission event. After all, something could still go wrong between the form being submitted and the data being received by the server.

Still, this bit of code is better than nothing. Remember, it’s intended as an enhancement. You should be able to drop it into any project and improve the user experience a little bit. Ideally, no one will ever notice it’s there—it’s the kind of enhancement that only kicks in when something goes wrong. A little smidgen of resilient web design. A defensive enhancement.

Monday, October 5th, 2020

The reason for a share button type

If you’re at all interested in what I wrote about a declarative Web Share API—and its sequel, a polyfill for button type="share"—then you might be interested in an explainer document I’ve put together.

It’s a useful exercise for me to enumerate the reasoning for button type="share" in one place. If you have any feedback, feel free to fork it or create an issue.

The document is based on my initial blog posts and the discussion that followed in this issue on the repo for the Web Share API. In that thread I got some pushback from Marcos. There are three points he makes. I think that two of them lack merit, but the third one is actually spot on.

Here’s the first bit of pushback:

Apart from placing a button in the content, I’m not sure what the proposal offers over what (at least one) browser already provides? For instance, Safari UI already provides a share button by default on every page

But that is addressed in the explainer document for the Web Share API itself:

The browser UI may not always be available, e.g., when a web app has been installed as a standalone/fullscreen app.

That’s exactly what I wanted to address. Browser UI is not always available and as progressive web apps become more popular, authors will need to provide a way for users to share the current URL—something that previously was handled by browsers.
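Until then, authors have to do it imperatively, something like this (the selector is illustrative):

var shareButton = document.querySelector('.share');
if (shareButton && navigator.share) {
  shareButton.removeAttribute('hidden');
  shareButton.addEventListener('click', function () {
    navigator.share({
      title: document.title,
      url: window.location.href
    }).catch(function () {
      // the user dismissed the share sheet; nothing to do
    });
  });
}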

That use-case of sharing the current page leads nicely into the second bit of pushback:

The API is specialized… using it to share the same page is kinda pointless.

But again, the explainer document for the Web Share API directly contradicts this:

Sharing the page’s own URL (a very common case)…

Rather than being a difference of opinion, this is something that could be resolved with data. I’d really like to find out how people are currently using the Web Share API. How much of the current usage falls into the category of “share the current page”? I don’t know the best way to gather this data though. If you have any ideas, let me know. I’ve started an issue where you can share how you’re using the Web Share API. Or if you’re not using the Web Share API, but you know someone who is, please let them know.

Okay, so those first two bits of pushback directly contradict what’s in the explainer document for the Web Share API. The third bit of pushback is more philosophical and, I think, more interesting.

The Web Share API explainer document does a good job of explaining why a declarative solution is desirable:

The link can be placed declaratively on the page, with no need for a JavaScript click event handler.

That’s also my justification for having a declarative alternative: it would be easier for more people to use. I said:

At a fundamental level, declarative technologies have a lower barrier to entry than imperative technologies.

But Marcos wrote:

That’s demonstrably false and a common misconception: See OWL, XForms, SVG, or any XML+namespace spec. Even HTML is poorly understood, but it just happens to have extremely robust error recovery (giving the illusion of it being easy). However, that’s not a function of it being “declarative”.

He’s absolutely right.

It’s not so much that I want a declarative option—I want an option that has robust error recovery. After all, XML is a declarative language but its error handling is as strict as an imperative language like JavaScript: make one syntactical error and nothing works. XML has a brittle error-handling model by design. HTML and CSS have extremely robust error recovery by design. It’s that error-handling model that gives HTML and CSS their robustness.

I’ve been using the word “declarative” when I actually meant “robust in handling errors”.

I guess that when I’ve been talking about “a declarative solution”, I’ve been thinking in terms of the three languages parsed by browsers: HTML, CSS, and JavaScript. Two of those languages are declarative, and those two also happen to have much more forgiving error-handling than the third language. That’s the important part—the error handling—not the fact that they’re declarative.

I’ve been using “declarative” as a shorthand for “either HTML or CSS”, but really I should try to be more precise in my language. The word “declarative” covers a wide range of possible languages, and not all of them lower the barrier to entry. A declarative language with a brittle error-handling model is as daunting as an imperative language.

I should try to use a more descriptive word than “declarative” when I’m describing HTML or CSS. Resilient? Robust?

With that in mind, button type="share" is worth pursuing. Yes, it’s a declarative option for using the Web Share API, but more importantly, it’s a robust option for using the Web Share API.
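In markup, the whole proposal boils down to something like:

<button type="share">Share this page</button>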

I invite you to read the explainer document for a share button type and I welcome your feedback …especially if you’re currently using the Web Share API!

Tuesday, September 29th, 2020

Building a client side proxy

This is a great way to use a service worker to circumvent censorship:

After the visitor opens the website once over a VPN, the service worker is downloaded and installed. The VPN can then be disabled, and the service worker will take over to request content from non-blocked servers, effectively acting as a proxy.
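The fetch handler in such a service worker might look something like this (a much-simplified sketch of the idea, not the project’s actual code; the mirror URL is hypothetical):

self.addEventListener('fetch', function (event) {
  var url = new URL(event.request.url);
  // re-route requests to an unblocked mirror instead of the blocked origin
  event.respondWith(
    fetch('https://unblocked-mirror.example.com' + url.pathname)
  );
});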

Monday, September 28th, 2020

Chris Ferdinandi: The Lean Web | July 2020 - YouTube

A great presentation on taking a sensible approach to web development. Great advice, as always, from the blogging machine that is Chris Ferdinandi.

The web is a bloated, over-engineered mess. And, according to developer and educator Chris Ferdinandi, many of our modern “best practices” are actually making the web worse. In this talk, Chris explores The Lean Web, a set of principles for a simpler, faster world-wide web.

Chris Ferdinandi: The Lean Web | July 2020

Monday, September 21st, 2020

The web we left behind by Kyle Jacobson (ESNEXT 2020) - YouTube

This is such an excellent presentation! It really resonates with me.

Kyle Jacobson is a developer who’s been working with the web for over 10 years, and he talks about lessons from the past that can make the future of the web not only easier to develop using battle-tested technologies, but also one friendlier for humans.

The web we left behind by Kyle Jacobson (ESNEXT 2020)

Friday, September 18th, 2020

Built to Last

Don’t blame it on the COBOL:

It’s a common fiction that computing technologies tend to become obsolete in a matter of years or even months, because this sells more units of consumer electronics. But this has never been true when it comes to large-scale computing infrastructure. This misapprehension, and the language’s history of being disdained by an increasingly toxic programming culture, made COBOL an easy scapegoat. But the narrative that COBOL was to blame for recent failures undoes itself: scapegoating COBOL can’t get far when the code is in fact meant to be easy to read and maintain.

It strikes me that the resilience of programs written in COBOL is like the opposite of today’s modern web stack, where the tangled weeds of nested dependencies ensure that projects get harder and harder to maintain over time.

In a field that has elevated boy geniuses and rockstar coders, obscure hacks and complex black-boxed algorithms, it’s perhaps no wonder that a committee-designed language meant to be easier to learn and use—and which was created by a team that included multiple women in positions of authority—would be held in low esteem. But modern computing has started to become undone, and to undo other parts of our societies, through the field’s high opinion of itself, and through the way that it concentrates power into the hands of programmers who mistake social, political, and economic problems for technical ones, often with disastrous results.

Monday, September 14th, 2020

Tolerance | Trys Mudford

Trys ponders home repair projects and Postel’s Law.

As we build our pages, components, and business logic, establish where tolerance should be granted. Consider how flexible each entity should be, and on what axis. Determine which items need to be fixed and less tolerant. There will be areas where the data or presentation being accurate is more important than being flexible - document these decisions.

Tuesday, August 18th, 2020

The Just in Case Mindset in CSS

Examples of defensive coding for CSS. This is an excellent mindset to cultivate!

Wednesday, August 5th, 2020

The Resiliency of the Internet | Jim Nielsen’s Weblog

An ode to the network architecture of the internet:

I believe the DNA of resiliency built into the network manifests itself in the building blocks of what’s transmitted over the network. The next time somebody calls HTML or CSS dumb, think about that line again:

That simplicity, almost an intentional brainlessness…is a key to its adaptability.

It’s not a bug. It’s a feature.

Yes! I wish more web developers would take cues from the very medium they’re building on top of.

Friday, July 31st, 2020

Smashing Podcast Episode 21 With Chris Ferdinandi: Are Modern Best Practices Bad For The Web? — Smashing Magazine

I really enjoyed this interview between Drew and Chris. I love that there’s a transcript so you can read the whole thing if you don’t feel like huffduffing it.

Thursday, July 30th, 2020

Lateral Thinking With Withered Technology · Matthias Ott – User Experience Designer

What web development can learn from the Nintendo Game and Watch.

The Web now consists of an ever-growing number of different frameworks, methodologies, screen sizes, devices, browsers, and connection speeds. “Lateral thinking with withered technology” – progressively enhanced – might actually be an ideal philosophy for building accessible, performant, resilient, and original experiences for a wide audience of users on the Web.

Friday, July 24th, 2020

Make me think! – Ralph Ammer

This is about seamful design.

We need to know things better if we want to be better.

It’s also about progressive enhancement.

Highly sophisticated systems work flawlessly, as long as things go as expected.

When a problem occurs which hasn’t been anticipated by the designers, those systems are prone to fail. The more complex the systems are, the higher are the chances that things go wrong. They are less resilient.

Progressive · Matthias Ott – User Experience Designer

Progressive enhancement is not yet another technology or passing fad. It is a lasting strategy, a principle, to deal with complexity because it lets you build inclusive, resilient experiences that work across different contexts and that will continue to work, once the next fancy JavaScript framework enters the scene – and vanishes again.

But why don’t more people practice progressive enhancement? Is it only because they don’t know better? This might, in fact, be the primary reason. On top of that, especially many JavaScript developers seem to believe that it is not possible or necessary to build modern websites and applications that way.

A heartfelt look at progressive enhancement:

Some look at progressive enhancement like a thing from the past of which the old guard just can’t let go. But to me, progressive enhancement is the future of the Web. It is the basis for building resilient, performant, interoperable, secure, usable, accessible, and thus inclusive experiences. Not only for the Web of today but for the ever-growing complexity of an ever-changing and ever-evolving Web.

Saturday, July 18th, 2020

An Introduction To Stimulus.js — Smashing Magazine

An intro to Stimulus, the lightweight JavaScript library from Basecamp that takes a progressive enhancement approach, as seen with HEY.

One aspect I really like about the approach Stimulus encourages is I can focus on sending HTML down the wire to my users, which is then jazzed up a little with JavaScript.

I’ve always been a fan of using the first few milliseconds of a user’s attention getting what I have to share with them — in front of them. Then worrying about setting up the interaction layer while the user can start processing what they’re seeing.

Furthermore, if the JavaScript were to fail for whatever reason, the user can still see the content and interact with it without JavaScript.

Thursday, July 16th, 2020

Hey now

Progressive enhancement is at the heart of everything I do on the web. It’s the bedrock of my speaking and writing too. Whether I’m writing about JavaScript, Ajax, HTML, or service workers, it’s always through the lens of progressive enhancement. Sometimes I explicitly bang the drum, like with Resilient Web Design. Other times I don’t mention it by name at all, and instead talk only about its benefits.

I sometimes get asked to name some examples of sites that still offer their core functionality even when JavaScript fails. I usually mention Amazon.com, although that has other issues. But quite often I find that a lot of the examples I might mention are dismissed as not being “web apps” (whatever that means).

The pushback I get usually takes the form of “Well, that approach is fine for websites, but it wouldn’t work for something like Gmail.”

It’s always Gmail. Which is odd. Because if you really wanted to flummox me with a product or service that defies progressive enhancement, I’d have a hard time with something like, say, a game (although it would be pretty cool to build a text adventure that’s progressively enhanced into a first-person shooter). But an email client? That would work.

Identify core functionality.

Read emails. Write emails.

Make that functionality available using the simplest possible technology.

HTML for showing a list of emails, HTML for displaying the contents of those emails, HTML for the form you write the response in.

Enhance!

Now add all the enhancements that improve the experience—keyboard shortcuts; Ajax instead of full-page refreshes; local storage, all that stuff.
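For example, the Ajax enhancement can be layered on without becoming a dependency (the selector is illustrative):

var form = document.querySelector('form.reply');
if (form && window.fetch) {
  form.addEventListener('submit', function (event) {
    event.preventDefault();
    fetch(form.action, {
      method: 'POST',
      body: new FormData(form)
    }).then(function (response) {
      // update the page in place instead of a full refresh
    });
  });
}
// if fetch is missing or the script fails, the form still submits normally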

Can you build something that works just like Gmail without using any JavaScript? No. But that’s not what progressive enhancement is about. It’s about providing the core functionality (reading and writing emails) with the simplest possible technology (HTML) and then enhancing using more powerful technologies (like JavaScript).

Progressive enhancement isn’t about making a choice between using simpler more robust technologies or using more advanced features; it’s about using simpler more robust technologies and then using more advanced features. Have your cake and eat it.

Fortunately I no longer need to run this thought experiment to imagine what it would be like if something like Gmail were built with a progressive enhancement approach. That’s what HEY is.

Sam Stephenson describes the approach they took:

HEY’s UI is 100% HTML over the wire. We render plain-old HTML pages on the server and send them to your browser encoded as text/html. No JSON APIs, no GraphQL, no React—just form submissions and links.

If you think that sounds like the web of 25 years ago, you’re right! Except the HEY front-end stack progressively enhances the “classic web” to work like the “2020 web,” with all the fidelity you’d expect from a well-built SPA.

See? It’s not either resilient or modern—it’s resilient and modern. Have your cake and eat it.

And yet this supremely sensible approach is not considered “modern” web development:

The architecture astronauts who, for the past decade, have been selling us on the necessity of React, Redux, and megabytes of JS, cannot comprehend the possibility of building an email app in 2020 with server-rendered HTML.

HEY isn’t perfect by any means—they’ve got a lot of work to do on their accessibility. But it’s good to have a nice short answer to the question “But what about something like Gmail?”

It reminds me of responsive web design:

When Ethan Marcotte demonstrated the power of responsive design, it was met with resistance. “Sure, a responsive design might work for a simple personal site but there’s no way it could scale to a large complex project.”

Then the Boston Globe launched its responsive site. Microsoft made their homepage responsive. The floodgates opened again.

It’s a similar story today. “Sure, progressive enhancement might work for a simple personal site, but there’s no way it could scale to a large complex project.”

The floodgates are ready to open. We just need you to create the poster child for resilient web design.

It looks like HEY might be that poster child.

I have to wonder if it’s coincidental or connected that this is a service that’s also tackling ethical issues like tracking? Their focus is very much on people above technology. They’ve taken a human-centric approach to their product and a human-centric approach to web development …because ultimately, that’s what progressive enhancement is.

Thursday, June 25th, 2020

On dependency | RobWeychert.com V7

I’m very selective about how I depend on other people’s work in my personal projects. Here are the factors I consider when evaluating dependencies.

  • Complexity How complex is it, who absorbs the cost of that complexity, and is that acceptable?
  • Comprehensibility Do I understand how it works, and if not, does that matter?
  • Reliability How consistently and for how long can I expect it to work?

I really like Rob’s approach to choosing a particular kind of dependency when working on the web:

When I’m making things, that’s how I prefer to depend on others and have them depend on me: by sharing strong, simple ideas as a collective, and recombining them in novel ways with rigorous specificity as individuals.

Monday, June 22nd, 2020

Always bet on HTML | Go Make Things

I teach JS for a living. I’m obviously not saying “never use JS” or “JavaScript has no place on the web.” Hell, there are even times where building a JS-first app makes sense.

But if I were going to bet on a web technology, it’s HTML. Always bet on HTML.

Monday, May 18th, 2020

Hard to break

I keep thinking about some feedback that Cassie received recently.

She had delivered the front-end code for a project at Clearleft, and—this being Cassie we’re talking about—the code was rock solid. The client’s Quality Assurance team came back with the verdict that it was “hard to break.”

Hard to break. I love that. That might be the best summation I’ve heard for describing resilience on the web.

If there’s a corollary to resilient web design, it would be brittle web design. In a piece completely unrelated to web development, Jamais Cascio describes brittle systems:

When something is brittle, it’s susceptible to sudden and catastrophic failure.

That sounds like an inarguably bad thing. So why would anyone end up building something in a brittle way? Jamais Cascio continues:

Things that are brittle look strong, may even be strong, until they hit a breaking point, then everything falls apart.

Ah, there’s the rub! It’s not that brittle sites don’t work. They work just fine …until they don’t.

Brittle systems are solid until they’re not. Brittleness is illusory strength. Things that are brittle are non-resilient, sometimes even anti-resilient — they can make resilience more difficult.

Kilian Valkhof makes the same point when it comes to front-end development. For many, accessibility is an unknown unknown:

When you start out it’s you, notepad and a browser against the world. You open up that notepad, and you type

<div onclick="alert('hello world');">Click me!</div>

You fire up your browser, you click your div and …it works! It just works! Awesome. You open up the devtools. No errors. Well done! Clearly you did a good job. On to the next thing.

At the surface level, there’s no discernable difference between a resilient solution and a brittle one:

For all sorts of reasons, both legitimate and, as always, weird browser legacy reasons, a clickable div will mostly work. Well enough to fool someone starting out anyway.

If everything works, how would they know it kinda doesn’t?
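Compare that with the resilient version: it behaves the same for a mouse user, but keyboard focus and button semantics come built in:

<button type="button" onclick="alert('hello world');">Click me!</button>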

Kilian goes on to suggest ways to try to make this kind of hidden brittleness more visible.

Furthermore we could envision a browser that is much stricter when developing.

This is something I touched on when I was talking about web performance with Gerry on his podcast:

There’s a disconnect in the process we go through when we’re making something, and then how that thing is experienced when it’s actually on the web, which is dependent on network speeds and processing speeds and stuff.

I spend a lot of time wondering why so many websites are badly built. Sure, there’s a lot that can be explained by misaligned priorities. And it could just be an expression of Sturgeon’s Law—90% of websites are crap because 90% of everything is crap. But I’ve also come to realise that even though resilience is the antithesis to brittleness, they both share something in common: they’re invisible.

We have a natural bias towards what’s visible. Being committed to making sure something is beautiful to behold is, in some ways, the easy path to travel. But being committed to making sure something is also hard to break? That takes real dedication.

Thursday, April 30th, 2020

‘The stakes feel higher but, with good practice, it need not be scary’ – NHS.UK design lead on responding to coronavirus | PublicTechnology.net

This isn’t the time to get precious about your favourite design and development tools. Use progressive enhancement as your philosophy. Your service might have to be accessed on old devices in hospitals with outdated tech or unsupported operating systems. HTML+CSS is your best bet to ensure that the service can be accessed in unlikely scenarios you’ve never even considered. Do you want to take that risk at a time like this? Nope, me neither.

Save the React squabbles for another time. Make it accessible and robust from day one.