A terrific explanation of the
aria-live attribute from Ire. If you’re doing anything with Ajax, this is vital knowledge.
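To give a flavour of it, here's a minimal sketch (the /search endpoint is made up): a polite live region that announces the outcome of an Ajax request to screen reader users.

```js
// The live region must already be in the DOM before its content
// changes, so that assistive technology is watching it.
const status = document.createElement('div');
status.setAttribute('aria-live', 'polite');
document.body.appendChild(status);

fetch('/search?q=aria') // hypothetical endpoint
  .then(response => response.json())
  .then(results => {
    status.textContent = `${results.length} results loaded`;
  })
  .catch(() => {
    status.textContent = 'Sorry, the search failed.';
  });
```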
A thorough explanation of the history and inner workings of Cross-Origin Resource Sharing.
Like tales of a mythical sea beast, every developer has a story to tell about the day CORS seized upon one of their web requests, dragging it down into the inexorable depths, never to be seen again.
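The mechanism itself is mundane, though: the browser blocks a cross-origin response unless the server opts in with a header. A minimal sketch of the server side in Node (the allowed origin is a placeholder):

```js
const http = require('http');

// Without the Access-Control-Allow-Origin header, a script on
// another origin can make this request but never read the response.
http.createServer((request, response) => {
  response.setHeader('Access-Control-Allow-Origin', 'https://example.com');
  response.setHeader('Content-Type', 'application/json');
  response.end(JSON.stringify({ ok: true }));
}).listen(8080);
```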
This is a really good use-case for cancelling fetch requests: making API calls while autocompleting in search.
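Something along these lines, I imagine — the /suggest endpoint and the renderSuggestions function here are hypothetical:

```js
// Each keystroke aborts the in-flight request before starting a new
// one, so a slow early response can't overwrite later results.
let controller;

async function suggest(query) {
  if (controller) controller.abort();
  controller = new AbortController();
  try {
    const response = await fetch(`/suggest?q=${encodeURIComponent(query)}`, {
      signal: controller.signal,
    });
    return await response.json();
  } catch (error) {
    if (error.name === 'AbortError') return null; // superseded by a newer keystroke
    throw error;
  }
}

document.querySelector('#search').addEventListener('input', async (event) => {
  const results = await suggest(event.target.value);
  if (results) renderSuggestions(results);
});
```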
This is definitely the best review of any of my books.
Usability Testing of Inline Form Validation: 40% Don’t Have It, 20% Get It Wrong
I saw Christian speak on this topic at Smashing Conference in Barcelona. Here, he takes a long hard look at some of the little things that sites get wrong when validating forms on the fly. It’s all good sensible stuff, although it sounds a bit medical when he talks about “Premature Inline Validation.”
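One common remedy — and this is my sketch, not Christian’s code — is to hold off until the user leaves the field, then re-validate eagerly only once the field has already been marked invalid:

```js
const email = document.querySelector('#email');

function validate(field) {
  const valid = field.checkValidity();
  field.setAttribute('aria-invalid', String(!valid));
  return valid;
}

// Punish late: only flag the field once the user has left it.
email.addEventListener('blur', () => validate(email));

// Reward early: clear the error as soon as the value becomes valid.
email.addEventListener('input', () => {
  if (email.getAttribute('aria-invalid') === 'true') validate(email);
});
```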
This is a truly fantastic example of progressive enhancement applied to a form.
What I love about this is that it shows how progressive enhancement isn’t a binary on/off choice: there are layers and layers of enhancements here, from simple inline validation all the way to service workers and background sync, with many options in between.
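The first JavaScript layer in a stack like that usually looks something like this sketch: the form works fine without any of it, and the script posts the same data to the same URL the form already targets.

```js
if ('fetch' in window) {
  const form = document.querySelector('form');
  form.addEventListener('submit', async (event) => {
    event.preventDefault();
    // Assumes a method="post" form; a GET form would need its
    // data serialised into the query string instead.
    const response = await fetch(form.action, {
      method: 'post',
      body: new FormData(form),
    });
    // If the Ajax route fails, fall back to a full-page submission.
    if (!response.ok) form.submit();
  });
}
```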
Turbolinks intercepts all clicks on
a href links to the same domain. When you click an eligible link, Turbolinks prevents the browser from following it. Instead, Turbolinks changes the browser’s URL using the History API, requests the new page using
XMLHttpRequest, and then renders the HTML response.
During rendering, Turbolinks replaces the current body element outright and merges the contents of the head element. The JavaScript window and document objects, and the HTML html element, persist from one rendering to the next.
Here’s the mustard it’s cutting:
It depends on the HTML5 History API and Window.requestAnimationFrame. In unsupported browsers, Turbolinks gracefully degrades to standard navigation.
This approach matches my own mental model for building on the web—I might try playing around with this on some of my projects.
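For the record, the core of the pattern looks roughly like this — a sketch using fetch rather than XMLHttpRequest, with the same mustard-cutting as Turbolinks:

```js
if (window.history && window.history.pushState && window.requestAnimationFrame) {
  document.addEventListener('click', async (event) => {
    const link = event.target.closest('a[href]');
    if (!link || link.origin !== location.origin) return; // same-domain links only
    event.preventDefault();
    const response = await fetch(link.href);
    const html = await response.text();
    const next = new DOMParser().parseFromString(html, 'text/html');
    history.pushState({}, '', link.href);
    requestAnimationFrame(() => {
      document.body.replaceWith(next.body); // swap the body outright
      document.title = next.title;
    });
  });
}
```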
It’s official: hash-bang URLs are an anti-pattern, and if you want your content indexed by Google, use progressive enhancement:
Since the assumptions for our 2009 proposal are no longer valid, we recommend following the principles of progressive enhancement.
Sensible words from Christian.
Web applications don’t follow new rules.
And frameworks will not help:
A lot of them are not really fixing fundamental problems of the web. What they do is add developer convenience. … This would be totally OK, if we were honest about it.
A terrific case study in progressive enhancement: starting with a good ol’ form that works for everybody and then adding on features like Ajax, SVG, the History API …the sky’s the limit.
A really nice explanation by Todd Kloots of Twitter’s use of progressive enhancement with Ajax and the HTML5 History API. There’s even a shout-out for Hijax in there.
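The piece that often gets forgotten in these Hijax setups is the back button: any History API enhancement needs a popstate handler so the URL and the content stay in sync. A bare-bones sketch:

```js
window.addEventListener('popstate', async () => {
  // Re-fetch the page for the restored URL rather than leaving
  // stale content under it. A real app would cache previous states.
  const response = await fetch(location.href);
  const html = await response.text();
  const next = new DOMParser().parseFromString(html, 'text/html');
  document.body.replaceWith(next.body);
  document.title = next.title;
});
```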
A great step-by-step tutorial from Brad on developing a responsive site with a Content First mindset.
Scott walks through the code and thinking behind the conditional loading pattern on The Boston Globe site. This is such a useful and valuable pattern!
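The gist of the pattern, as I understand it (the selector and fragment URL here are made up): secondary content is a plain link by default, and only gets fetched and injected when the viewport warrants it.

```js
const placeholder = document.querySelector('[data-fragment-url]');

if (placeholder && window.matchMedia('(min-width: 40em)').matches) {
  fetch(placeholder.dataset.fragmentUrl)
    .then(response => response.text())
    .then(fragment => {
      placeholder.innerHTML = fragment;
    });
}
```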
I really like the thinking that’s gone into the design of Github, as shown in this presentation. It’s not really about responsive design as we commonly know it, but boy, is it a great deep dive into the importance of URLs and performance.
A superb post by Dan on the bigger picture of what’s wrong with hash-bang URLs. Well written and well reasoned.
Tim Bray calmly explains why hash-bang URLs are a very bad idea.
This is what we call “tight coupling” and I thought that anyone with a Computer Science degree ought to have been taught to avoid it.
So why use a hash-bang if it’s an artificial URL, and a URL that needs to be reformatted before it points to a proper URL that actually returns content?
Out of all the reasons, the strongest one is “Because it’s cool”. I said strongest not strong.