Remix and the Alternate Timeline of Web Development - Jim Nielsen’s Blog
It sounds like Remix takes a sensible approach to progressive enhancement.
There are two JavaScripts.
One for the server - where you can go wild.
One for the client - that should be thoughtful and careful.
Yes! This! I’m always astounded to see devs apply the same mindset to backend and frontend development, just because it happens to be in the same language. I don’t care what you use on your own machine or your own web server, but once you’re sending something down the wire to end users, you need to prioritise their needs over your own.
It’s the JavaScript on the client side that’s the problem. What’s given to the visitor.
I’d ask you, if you’re still reading, that you consider a separation of JavaScript between client and server. If you’re a dev, consider the payload, your bundle and work to reduce the cost to your visitor. Heck, think progressive enhancement.
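To make that split concrete, here’s a hedged sketch of the two JavaScripts in Remix’s route-module shape (the data is made up; a real loader would query a database):

```jsx
import { json } from "@remix-run/node";
import { useLoaderData } from "@remix-run/react";

// Server-only JavaScript: this function is stripped from the client bundle,
// so it can "go wild" without costing the visitor anything.
export async function loader() {
  // Stand-in for a real database query.
  const posts = [
    { id: 1, title: "Hello" },
    { id: 2, title: "World" },
  ];
  return json({ posts });
}

// Rendered to HTML on the server first; client-side JavaScript
// only enhances what's already there.
export default function Posts() {
  const { posts } = useLoaderData();
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}
```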
The primary benefit of Progressive Enhancement is not that “your app works without JavaScript” (though that’s a nice side-benefit) but rather that the mental model is drastically simpler.
I think that’s the primary benefit to developers. The primary benefit to users is that what you build will be faster and more resilient.
Anyway, this is a really good deep dive into different architectural choices for building on the web. Although I was surprised by this assertion in the first paragraph:
The most popular architecture employed by web developers today is the Single Page App (SPA)
Citation needed. Single Page Apps do indeed dominate the discussion, but I don’t think that necessarily matches the day-to-day reality.
While I’ve always been bothered by the downsides of SPAs, I always thought the gap would be bridged sooner or later, and that performance concerns would eventually vanish thanks to things like code splitting, tree shaking, or SSR. But ten years later, many of these issues remain. Many SPA bundles are still bloated with too many dependencies, hydration is still slow, and content is still duplicated in memory on the client even if it already lives in the DOM.
Yet something might be changing: for whatever reason, it feels like people are finally starting to take note and ask why things have to be this way.
Interesting to see a decade-long perspective. I especially like how Sacha revisits and reassesses design principles from ten years ago:
- Data on the Wire. Don’t send HTML over the network. Send data and let the client decide how to render it.
Verdict: 👎
It’s since become apparent that you often do need to send HTML over the network, and things seem to be moving back towards handling as much as possible of your HTML compilation on the server, not on the client.
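For illustration, here’s a minimal sketch of the two approaches side by side, as an Express server with made-up routes and data:

```js
const express = require("express");
const app = express();

const posts = [{ title: "Hello" }, { title: "World" }];

// The 2012 principle: send data over the wire, let the client render it.
app.get("/api/posts", (req, res) => res.json(posts));

// The revised verdict: compile the HTML on the server instead.
app.get("/posts", (req, res) => {
  const items = posts.map((p) => `<li>${p.title}</li>`).join("");
  res.send(`<!doctype html><title>Posts</title><ul>${items}</ul>`);
});

app.listen(3000);
```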
Prompted by my talk, The State Of The Web, Brian zooms out to get some perspective on how browser power is consolidated.
The web is made of clients and servers. There’s a huge amount of diversity in the server space but there’s very little diversity when it comes to clients because making a browser has become so complex and expensive.
But Brian hopes that this complexity and expense could be distributed amongst a large number of smaller players.
10 companies agreeing to invest $10k apiece to advance and maintain some area of shared interest is every bit as useful as 1 agreeing to invest $100k generally. In fact, maybe it’s more representative.
We believe that there is a very long tail of increasingly smaller companies who could do something, if only they coordinated to fund it together. The further we stretch this out, the more sources we enable, the more its potential adds up.
This is a terrific and nuanced talk that packs a lot into less than twenty minutes.
I heartily concur with Rich’s assessment that most websites aren’t apps or documents but something in between. It’s a continuum. And I really like Rich’s proposed approach: transitional web apps.
(The secret sauce in transitional web apps is progressive enhancement.)
One of the other arguments we hear in support of the SPA is the reduction in cost of cyber infrastructure. As if pushing that hosting burden onto the client (without their consent, for the most part, but that’s another topic) is somehow saving us on our cloud bills. But that’s ridiculous.
Sensible advice from Chris:
So what’s the best rendering method? Whatever works best for you, but perhaps a hierarchy like this makes some general sense:
- Static HTML as much as you can
- Edge functions over static HTML so you can do whatever dynamic things
- Server generated HTML what you have to after that
- Client-side render only what you absolutely have to
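To make the second rung of that hierarchy concrete, here’s a hedged sketch of an edge function layered over static HTML, in Cloudflare Workers syntax (the route is made up):

```js
export default {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === "/greeting") {
      // One dynamic detail layered over an otherwise static site.
      const hour = new Date().getUTCHours();
      const greeting = hour < 12 ? "Good morning" : "Good evening";
      return new Response(`<!doctype html><h1>${greeting}</h1>`, {
        headers: { "content-type": "text/html" },
      });
    }
    // Everything else passes through to the static HTML at the origin.
    return fetch(request);
  },
};
```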
I do think large tech companies employ JavaScript frameworks because, amongst other things, it saves them money at their scale. And what Big Tech does trickles down in the form of default choices for many others (“they’re doing it and are insanely successful, so mimicking them can’t be a bad idea”). However, the scale at which smaller projects operate doesn’t necessarily translate to the same kind of cost savings.
If the browser needs to parse 296kb of JavaScript to show a list of blog posts, that’s not Progressive Enhancement, it’s using the wrong tool for the job.
A good explanation of the hydration problem in tools like Gatsby.
JavaScript is a powerful language that can do some incredible things, but it’s incredibly easy to jump to using it too early in development, when you could be using HTML and CSS instead.
Garrett’s observation is spot-on here:
I’ve been trying to understand the appeal of these frameworks by giving them an objective chance. I’ve expanded my knowledge of JavaScript and tried to give them the benefit of the doubt. They do have their places, but the only explanation I can come up with is that developers are taking a similar approach as Ruby and focusing on developer convenience and productivity. Only, instead of Ruby’s performance being tied to the CPU level, JavaScript frameworks push the performance burden to the client.
In both cases, the tradeoff happens in the name of developer happiness and productivity, but the strategies have entirely different consequences. With Ruby, the CPU is still (mostly) the responsibility of the development team, and it can be upgraded. With JavaScript, the page weight becomes an externality pushed onto visitors.
JavaScript is fickle. It can fail to load. It can be disabled. It can be blocked. It can fail to run. It probably is fine most of the time, but when it fails, everything tends to go bad. And having such a hard point of failure is not ideal.
This is a very important point:
It’s important not to try making the no-JS experience work like the full one. The interface has to be revisited. Some features might even have to be removed, or dramatically reduced in scope. That’s also okay. As long as the main features are there and things work nicely, it should be fine that the experience is not as polished.
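In practice that usually means building up from a baseline that works on its own. A hedged sketch (the endpoint and elements are made up): a plain HTML form does the job by itself, and JavaScript upgrades it when available.

```js
// The underlying HTML works with no JavaScript at all:
//
//   <form action="/search" method="get">
//     <input type="search" name="q">
//     <button>Search</button>
//   </form>
//
// This script, if and when it loads, upgrades the form
// to an in-page search without a full page reload.
const form = document.querySelector("form");
const results = document.querySelector("#results");

form?.addEventListener("submit", async (event) => {
  event.preventDefault();
  const params = new URLSearchParams(new FormData(form));
  const response = await fetch(`/search?${params}`);
  results.innerHTML = await response.text();
});
```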
This is a great little technique from Remy: when a service worker is being installed, you make sure that the page(s) the user is first visiting get added to a cache.
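Here’s a sketch of the idea (the cache name is illustrative):

```js
// During install, find the page(s) the visitor already has open and
// add them to the cache, so the entry page works offline right away.
self.addEventListener("install", (event) => {
  event.waitUntil(
    (async () => {
      const cache = await caches.open("pages-v1");
      // During install this worker controls no pages yet,
      // hence `includeUncontrolled: true`.
      const windows = await self.clients.matchAll({
        includeUncontrolled: true,
      });
      await cache.addAll(
        windows.map((client) => new URL(client.url).pathname)
      );
    })()
  );
});
```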
Chris broke both his arms just to avoid speaking at the JAMstack conference in London. Seems a bit extreme to me.
Anyway, to make up for not being there, he made a website of his talk. It’s good stuff, tackling the split.
It’s cool to see the tech around our job evolve to the point that we can reach our arms around the whole thing. It’s worthy of some concern when the complication of web technology feels like it’s raising the barrier to entry.
A great bucketload of common sense from Jake:
Rather than copying bad examples from the history of native apps, where everything is delivered in one big lump, we should be doing a little with a little, then getting a little more and doing a little more, repeating until complete. Think about the things users are going to do when they first arrive, and deliver that. Especially consider those most-likely to arrive with empty caches.
And here’s a good way of thinking about that:
I’m a fan of progressive enhancement as it puts you in this mindset. Continually do as much as you can with what you’ve got.
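In code, that mindset often looks like loading on demand; a hedged sketch (the module and elements are hypothetical):

```js
// Ship the core experience up front, then fetch heavier features
// only when the visitor actually asks for them.
const button = document.querySelector("#show-comments");

button?.addEventListener("click", async () => {
  // The comments code never loads for visitors who never ask for it.
  const { renderComments } = await import("./comments.js");
  renderComments(document.querySelector("#comments"));
});
```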
All too often, saying “use the right tool for the job” is interpreted as “don’t use that tool!” but as Jake reminds us, the sign of a really good tool is its ability to adapt instead of demanding rigid usage:
Netflix uses React on the client and server, but they identified that the client-side portion wasn’t needed for the first interaction, so they leaned on what the browser can already do, and deferred client-side React. The story isn’t that they’re abandoning React, it’s that they’re able to defer it on the client until it’s needed. React folks should be championing this as a feature.
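For flavour, here’s a hedged sketch of that kind of deferral using current React APIs; this is not Netflix’s actual code, and the App module is hypothetical:

```js
// The server-rendered HTML is already usable on its own.
// React only loads and hydrates once the visitor interacts.
const hydrate = async () => {
  const [{ hydrateRoot }, { createElement }, { default: App }] =
    await Promise.all([
      import("react-dom/client"),
      import("react"),
      import("./App.js"),
    ]);
  hydrateRoot(document.querySelector("#root"), createElement(App));
};

document.addEventListener("pointerdown", hydrate, { once: true });
```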
A good analysis, but my takeaway was that the article could equally be called Why it’s tricky to measure Client-side Rendering performance. In a nutshell, just looking at metrics can be misleading.
Pre-classified metrics are a good signal for measuring performance. At the end of the day though, they may not properly reflect your site’s performance story. Profile each possibility and give it the eye test.
And it’s always worth bearing this in mind:
The best way to prioritize content is by building a static site. Ask yourself if the content needs JavaScript.
This article makes a good point about client-rendered pages:
Asynchronously loaded page elements shift click targets, resulting in a usability nightmare.
…but this has nothing, absolutely nothing to do with progressive web apps.
More fuel for the fire of evidence that far too many people think that progressive web apps and single page apps are one and the same.
Rachel uncovers a great phrase for dealing with older browsers:
It isn’t your fault, but it is your problem.
She points to multiple ways of using CSS Grid today while still providing a decent experience for older browsers.
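The workhorse of those techniques is the feature query; a minimal sketch (the class names are made up), where older browsers get a floated fallback and grid-supporting browsers override it:

```css
/* Older browsers get a simple floated layout. */
.gallery > * {
  float: left;
  width: 33.333%;
}

/* Browsers that understand grid take over. Becoming a grid item
   removes the float automatically, but the width needs resetting. */
@supports (display: grid) {
  .gallery {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
    gap: 1rem;
  }
  .gallery > * {
    width: auto;
  }
}
```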
Crucially, there’s one message that hasn’t changed in fifteen years:
Websites do not need to look the same in every browser.
It’s crazy that there are still designers and developers who haven’t internalised this. And before anyone starts claiming that the problem is with the clients and the bosses, Rachel has plenty of advice for talking with them too.
Your job is to learn about new things, and advise your client or your boss in the best way to achieve their business goals through your use of the available technology. You can only do that if you have learned about the new things. You can then advise them which compromises are worth making.
Phil describes the process of implementing the holy grail of web architecture (which perhaps isn’t as difficult as everyone seems to think it is):
I have been experimenting with something that seemed obvious to me for a while. A web development model which gives a pre-rendered, ready-to-consume, straight-into-the-eyeballs web page at every URL of a site. One which, once loaded, then behaves like a client-side, single page app.
Now that’s resilient web design!
It is possible to use React without relying completely on client-side JavaScript to render all your content—though it’s certainly not the default way most tutorials teach React. This ongoing tutorial aims to redress that imbalance.
Besides the main benefit of faster page loads, server rendering also enables large amounts of the site to run without JavaScript. There are many reasons why you would want this, but my personal reason is that it allows you to completely drop JavaScript support in older browsers but still have the site function.
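For reference, the heart of server rendering with React is quite small; a minimal Express sketch (the component and data are illustrative):

```js
const express = require("express");
const { createElement } = require("react");
const { renderToString } = require("react-dom/server");

// A stand-in component; a real site would render its actual pages.
const PostList = ({ posts }) =>
  createElement(
    "ul",
    null,
    posts.map((post) => createElement("li", { key: post.id }, post.title))
  );

const app = express();

app.get("/", (req, res) => {
  const posts = [{ id: 1, title: "Hello" }];
  const html = renderToString(createElement(PostList, { posts }));
  // The visitor gets working HTML whether or not any
  // client-side JavaScript ever runs.
  res.send(`<!doctype html><div id="root">${html}</div>`);
});

app.listen(3000);
```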