This is a great piece! It starts with a look back at some of the great minds of the nineteenth century: Herschel, Darwin, Babbage and Lovelace. Then it brings us, via JCR Licklider, to the present state of the web before looking ahead to what the future might bring.
So what will the life of an interface designer be like in the year 2120? Or even 2121? That would be a nice round 300 years after Babbage first had the idea of calculations being executed by steam.
I think there are some missteps along the way (I certainly don’t think that inline styles—AKA CSS in JS—are necessarily a move forwards) but I love the idea of applying chaos engineering to web design:
Think of every characteristic of an interface you depend on to not ‘fail’ for your design to ‘work.’ Now imagine if these services were randomly ‘failing’ constantly during your design process. How might we design differently? How would our workflows and priorities change?
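If you wanted to actually run that experiment, one approach (a hypothetical sketch of my own, not something the piece prescribes) would be a development-only service worker that randomly fails requests, forcing you to design for the failure cases:

```ts
// chaos-worker.ts: a development-only service worker that randomly
// fails subresource requests. The file name and the 25% failure rate
// are assumptions for the sake of the sketch.
// (Assumes TypeScript's 'webworker' lib for the FetchEvent types.)
const sw = self as unknown as ServiceWorkerGlobalScope;
const FAILURE_RATE = 0.25;

sw.addEventListener('fetch', (event: FetchEvent) => {
  // Let page navigations through so the document itself still loads.
  if (event.request.mode === 'navigate') return;

  if (Math.random() < FAILURE_RATE) {
    // A network-error response: the stylesheet, script, font or image
    // this request was asking for simply never arrives.
    event.respondWith(Response.error());
  }
  // Otherwise do nothing and the request falls through to the network.
});
```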
But there’s a difference between something degrading gracefully (the result) and graceful degradation (the approach).
I think the situation that Remy outlines here is quite common (in client-rehydrated server-rendered pages), but what’s less common is Remy’s questioning and iteration.
So I now have a simple rule of thumb: if there’s an onClick, there’s got to be an anchor around the component.
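In React terms, that rule of thumb might look something like this (a sketch of my own, not Remy's actual code; the component and route names are made up):

```tsx
import type { MouseEvent } from 'react';

type ProductLinkProps = {
  productId: string;
  onSelect: (id: string) => void;
};

// The href is the baseline: a server-rendered route that works with no
// JavaScript at all. The onClick is purely an enhancement on top of it.
export function ProductLink({ productId, onSelect }: ProductLinkProps) {
  function handleClick(event: MouseEvent<HTMLAnchorElement>) {
    // Don't hijack modified clicks; 'open in new tab' should keep working.
    if (event.metaKey || event.ctrlKey || event.shiftKey) return;
    event.preventDefault();
    onSelect(productId);
  }

  return (
    <a href={`/products/${productId}`} onClick={handleClick}>
      View product
    </a>
  );
}
```

If the JavaScript never arrives, or arrives broken, the anchor still takes you somewhere useful.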
53% of mobile visits leave a page that takes longer than 3 seconds to load. That means that a large number of visitors probably abandoned these sites because they were staring at a blank screen for 3 seconds, said “fuck it,” and left approximately halfway before the page showed up. The fact that the next page interaction would have been quicker—assuming all the JS files even downloaded correctly on the first attempt—doesn’t amount to much if they didn’t stick around for the first page to load. What was gained by putting the business logic in the front end in this scenario?
A good talk from Chris Ferdinandi, who says:
Tales of over-engineering, as experienced by Bridget. This resonates with me, and I think she’s right when she says that these things go in cycles. The pendulum always ends up swinging the other way eventually.
Here’s a thorough blow-by-blow account of the workshop I ran in Nottingham last week:
Jeremy’s workshop was a fascinating insight into resilience and how to approach a web project with ubiquity and consistency in mind from both a design and development point of view.
A really terrific piece from Garrett on the nature of the web:
Markup written almost 30 years ago runs exactly the same today as it did then without a single modification. At the same time, the platform has expanded to accommodate countless enhancements. And you don’t need a degree in computer science to understand or use the vast majority of it. Moreover, a well-constructed web page today would still be accessible on any browser ever made. Much of the newer functionality wouldn’t be supported, but the content would be accessible.
I share his concerns about the maintainability overhead introduced by new tools and frameworks:
I’d argue that for every hour these new technologies have saved me, they’ve cost me another in troubleshooting or upgrading the tool due to a web of invisible dependencies.
These are good challenges to think about. Almost all of them are user-focused, and there’s a refreshing emphasis on solving them without immediately reaching for a library:
It’s tempting to read about these problems with a particular view library or a data fetching library in mind as a solution. But I encourage you to pretend that these libraries don’t exist, and read again from that perspective. How would you approach solving these issues?
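Taking up that challenge for one common case, here’s a sketch of data fetching with no library at all, just fetch and the DOM (the endpoint, the data shape, and the container element are all assumptions of mine):

```ts
// Fetching and rendering comments by hand, with the loading and
// failure states handled explicitly rather than by a library.
type Comment = { author: string; body: string };

async function loadComments(container: HTMLElement): Promise<void> {
  container.textContent = 'Loading comments…';
  try {
    const response = await fetch('/api/comments');
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    const comments: Comment[] = await response.json();

    const list = document.createElement('ul');
    for (const comment of comments) {
      const item = document.createElement('li');
      item.textContent = `${comment.author}: ${comment.body}`;
      list.append(item);
    }
    container.replaceChildren(list);
  } catch {
    // Fail visibly and politely; any server-rendered fallback content
    // could be left in place instead.
    container.textContent = 'Comments could not be loaded right now.';
  }
}
```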
When a storm comes, some of the big news sites like CNN and NPR strip down to a zippy, performant text-only version that delivers the content without the bells and whistles.
I’d argue though that in some aspects, they are actually better than the original.
The “full” NPR site in comparison takes ~114 requests and weighs close to 3MB on average. Time to first paint is around 20 seconds on slow connections. It includes ads, analytics, tracking scripts and social media widgets.
Meanwhile, the actual news content is roughly the same.
I quite like the idea of storm-driven development.
I love this deep dive that Sara takes into the question of marking up content for progressive disclosure. It reminds me of Dan’s SimpleQuiz from back in the day.
Then there’s this gem, which I think is a terrifically succinct explanation of the importance of meaningful markup:
It’s always necessary, in my opinion, to consider what content would render and look like in foreign environments, or in environments that are not controlled by our own styles and scripts. Writing semantic HTML is the first step in achieving truly resilient Web sites and applications.
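As a rough illustration of that idea (my own sketch, not Sara’s exact pattern), here’s progressive disclosure built on top of semantic markup: the content is fully readable as plain HTML, and the script merely enhances it into a collapsible widget:

```ts
// Enhancing semantic markup into a disclosure widget. Assumed markup:
// a .disclosure section containing a heading and a .disclosure-content
// element (the class names are mine). Without this script, everything
// is simply visible and readable.
function enhanceDisclosure(section: HTMLElement): void {
  const heading = section.querySelector('h2, h3, h4');
  const content = section.querySelector<HTMLElement>('.disclosure-content');
  if (!heading || !content) return;

  // Put a real <button> inside the heading so the widget is keyboard-
  // and screen-reader-operable, keeping the heading semantics intact.
  const button = document.createElement('button');
  button.textContent = heading.textContent ?? '';
  button.setAttribute('aria-expanded', 'true');
  heading.replaceChildren(button);

  button.addEventListener('click', () => {
    const expanded = button.getAttribute('aria-expanded') === 'true';
    button.setAttribute('aria-expanded', String(!expanded));
    content.hidden = expanded;
  });
}

document.querySelectorAll<HTMLElement>('.disclosure').forEach((section) => {
  enhanceDisclosure(section);
});
```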
Oh, this is magnificent! A rallying call for everyone designing and developing on the web to avoid making any assumptions about the people we’re building for:
People will use your site how they want, and according to their means. That is wonderful, and why the Web was built.
I would even say that the % of people viewing your site the way you do rapidly approaches zilch.
Léonie makes a really good point here:
Ultimately you can’t control when and how things go wrong but you do have some control over what happens. This is why progressive enhancement exists.
This is a really good breakdown of what’s different about CSS (compared to other languages).
These differences may feel foreign, but it’s these differences that make CSS so powerful. And it’s my suspicion that developers who embrace these things, and have fully internalized them, tend to be far more proficient in CSS.
A good core experience is indicative of a well-structured web page, which, in turn, is usually a good sign for SEO and for accessibility. It’s usually a well designed web page, as the designer and developer have spent time and effort thinking about what’s truly core to the experience. Progressive enhancement means more robust experiences, with fewer bugs in production and fewer individual browser quirks, because we’re letting the platform do the job rather than trying to write it all from scratch.
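One common way of putting that into practice (a generic sketch, not taken from the quoted piece) is to feature-detect before enhancing, so that unsupported browsers simply keep the core experience:

```ts
// Feature-detect before enhancing: browsers that pass the test get the
// extra behaviour, everyone else keeps the working core experience.
// The './enhancements.js' module is hypothetical.
if ('IntersectionObserver' in window && 'fetch' in window) {
  import('./enhancements.js').catch(() => {
    // If the script fails to arrive, the core experience still stands.
  });
}
```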
Aaron gives a timely run-down of all the parts of a web experience that are out of our control. But don’t despair…
Recognizing all of the ways our carefully-crafted experiences can be rendered unusable can be more than a little disheartening. No one likes to spend their time thinking about failure. So don’t. Don’t focus on all of the bad things you can’t control. Focus on what you can control.
Start simply. Code defensively. User-test the heck out of it. Recognize the chaos. Embrace it. And build resilient web experiences that will work no matter what the internet throws at them.