Indiekit
Paul’s indie web project is live!
Meet the little Node.js server with all the parts needed to publish content to your personal website and share it on social networks.
You can read the accompanying blog post.
It sounds like Remix takes a sensible approach to progressive enhancement.
Do you still miss Google Reader, almost a decade after it was shut down? It’s back!
A Mastodon server is a feed reader, shared by everyone who uses that server.
I really like Simon’s description of the fediverse:
A Mastodon server (often called an instance) is just a shared blog host. Kind of like putting your personal blog in a folder on a domain on shared hosting with some of your friends.
Want to go it alone? You can do that: run your own dedicated Mastodon instance on your own domain.
This is spot-on:
Mastodon is just blogs and Google Reader, skinned to look like Twitter.
There are two JavaScripts.
One for the server - where you can go wild.
One for the client - that should be thoughtful and careful.
Yes! This! I’m always astounded to see devs apply the same mindset to backend and frontend development, just because it happens to be in the same language. I don’t care what you use on your own machine or your own web server, but once you’re sending something down the wire to end users, you need to prioritise their needs over your own.
It’s the JavaScript on the client side that’s the problem. What’s given to the visitor.
I’d ask you, if you’re still reading, that you consider a separation of JavaScript between client and server. If you’re a dev, consider the payload, your bundle and work to reduce the cost to your visitor. Heck, think progressive enhancement.
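In that spirit, here's a tiny sketch of what "thoughtful and careful" client-side JavaScript can look like: the page works fine without it, and the enhancement only ships when the browser supports it. (The element ID is hypothetical; this isn't from the linked post.)

```typescript
// A copy button is a pure enhancement: the page is complete without it.
const codeBlock = document.querySelector<HTMLElement>('#snippet');

// Only enhance if the Clipboard API is actually available — otherwise
// the visitor gets the page as-is, with no broken button shipped to them.
if (codeBlock && navigator.clipboard) {
  const button = document.createElement('button');
  button.textContent = 'Copy';
  button.addEventListener('click', () => {
    navigator.clipboard.writeText(codeBlock.textContent ?? '');
  });
  codeBlock.insertAdjacentElement('afterend', button);
}
```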
The primary benefit of Progressive Enhancement is not that “your app works without JavaScript” (though that’s a nice side-benefit) but rather that the mental model is drastically simpler.
I think that’s the primary benefit to developers. The primary benefit to users is that what you build will be faster and more resilient.
Anyway, this is a really good deep dive into different architectural choices for building on the web. Although I was surprised by this assertion in the first paragraph:
The most popular architecture employed by web developers today is the Single Page App (SPA)
Citation needed. Single Page Apps do indeed dominate the discussion, but I don’t think that necessarily matches the day-to-day reality.
While I’ve always been bothered by the downsides of SPAs, I always thought the gap would be bridged sooner or later, and that performance concerns would eventually vanish thanks to things like code splitting, tree shaking, or SSR. But ten years later, many of these issues remain. Many SPA bundles are still bloated with too many dependencies, hydration is still slow, and content is still duplicated in memory on the client even if it already lives in the DOM.
Yet something might be changing: for whatever reason, it feels like people are finally starting to take note and ask why things have to be this way.
Interesting to see a decade-long perspective. I especially like how Sacha revisits and reassesses design principles from ten years ago:
- Data on the Wire. Don’t send HTML over the network. Send data and let the client decide how to render it.
Verdict: 👎
It’s since become apparent that you often do need to send HTML over the network, and things seem to be moving back towards handling as much as possible of your HTML compilation on the server, not on the client.
It me:
Broadly, these are websites which are still web pages, not web applications; they’re pages of essentially static information, personal websites, blogs, and so on, but they are slightly dynamic. They might have a style selector at the top of each page, causing a cookie to be set, and the server to serve a different stylesheet on every subsequent page load.
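For illustration, that style-selector mechanism really is this small — a sketch assuming an Express server, with the route, cookie, and theme names made up:

```typescript
import express from 'express';
import cookieParser from 'cookie-parser';

const app = express();
app.use(cookieParser());

// The style selector at the top of each page links here:
// following it sets a cookie, then sends you back where you were.
app.get('/set-style/:theme', (req, res) => {
  res.cookie('theme', req.params.theme, { maxAge: 31536000000 });
  res.redirect(req.get('Referer') || '/');
});

// Every subsequent page load serves a different stylesheet
// based on that cookie — slightly dynamic, still just a web page.
app.get('/', (req, res) => {
  const theme = req.cookies.theme === 'dark' ? 'dark' : 'light';
  res.send(`<!DOCTYPE html>
<link rel="stylesheet" href="/styles/${theme}.css">
<p>Pages of essentially static information.</p>`);
});

app.listen(3000);
```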
This rings sadly true to me:
Suppose a company makes a webpage for looking up products by their model number. If this page were made in 2005, it would probably be a single PHP page. It doesn’t need a framework — it’s one SELECT query, that’s it. If this page were made in 2022, a conundrum will be faced: the company probably chose to use a statically generated website. The total number of products isn’t too large, so instead their developers stuff a gigantic JSON file of model numbers for every product made by the company on the website and add some client-side JavaScript to download and query it. This increases download sizes and makes things slower, but at least you didn’t have to spin up and maintain a new application server. This example is fictitious but I believe it to be representative.
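The 2005 version really is that small. Here's a hedged sketch of the same single-query page in Node rather than PHP (table and column names invented):

```typescript
import express from 'express';
import Database from 'better-sqlite3';

// One parameterised SELECT — no framework, no gigantic client-side
// JSON blob to download and query.
const db = new Database('products.db');
const app = express();

app.get('/product', (req, res) => {
  const row = db
    .prepare('SELECT name, description FROM products WHERE model_number = ?')
    .get(String(req.query.model ?? '')) as
    { name: string; description: string } | undefined;

  if (!row) {
    res.status(404).send('<p>No such model number.</p>');
    return;
  }
  res.send(`<h1>${row.name}</h1><p>${row.description}</p>`);
});

app.listen(3000);
```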
Also, I never thought about “serverless” like this:
Recently we’ve seen the rise in popularity of AWS Lambda, a “functions as a service” provider. From my perspective this is literally a reinvention of CGI, except a) much more complicated for essentially the same functionality, b) with vendor lock-in, c) with a much more complex and bespoke deployment process which requires the use of special tools.
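The parallel is easy to see side by side. A sketch, with the CGI half as comments and a made-up greeting standing in for "essentially the same functionality":

```typescript
// A CGI script (Node standing in for Perl or PHP) reads env vars
// and writes headers plus body to stdout:
//
//   #!/usr/bin/env node
//   console.log('Content-Type: text/html\n');
//   console.log(`<p>Hello, ${process.env.QUERY_STRING}</p>`);

// The Lambda equivalent behind API Gateway: a handler receives a
// request-shaped event and returns a response-shaped object.
export const handler = async (event: {
  queryStringParameters?: Record<string, string>;
}) => {
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'text/html' },
    body: `<p>Hello, ${event.queryStringParameters?.name ?? 'world'}</p>`,
  };
};
```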
You had me at “beautifully resilient apps with progressive enhancement”.
This is a great clear walkthrough of enhancing a form submission. A lot of this seems like first principles to me, but if you’ve only ever built single page apps, then thinking about a server-submission process first might well be revelatory.
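The general shape of that approach, sketched rather than quoted from the walkthrough (the success message and error handling here are hypothetical):

```typescript
// The form submits to the server and works with no JavaScript at all.
// When JavaScript is available, fetch() takes over the same submission.
const form = document.querySelector<HTMLFormElement>('form');

form?.addEventListener('submit', async (event) => {
  event.preventDefault(); // skip the full-page round trip…
  try {
    const response = await fetch(form.action, {
      method: form.method,
      body: new FormData(form),
    });
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    form.insertAdjacentHTML('afterend', '<p>Thanks! Saved.</p>');
  } catch {
    form.submit(); // …but fall back to the native submission on failure
  }
});
```

Because the server-submission process comes first, the client-side code is pure enhancement: if the fetch fails, or never runs, the form still does its job.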
This is a very nifty use of a service worker—choose a local folder that you want to navigate using HTTP rather than the file system.
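One plausible shape for the trick — a sketch assuming the File System Access API, not the project's actual code (the `/local/` scope is invented, and only top-level files are handled, for brevity):

```typescript
// On the page: let the visitor pick a folder, then hand the directory
// handle to the service worker (handles can be sent via postMessage).
async function shareFolder(): Promise<void> {
  const dirHandle = await (window as any).showDirectoryPicker();
  const registration = await navigator.serviceWorker.ready;
  registration.active?.postMessage({ type: 'set-root', dirHandle });
}

// In the service worker: intercept requests under the made-up /local/
// scope and answer them straight from the chosen folder.
let root: any;

self.addEventListener('message', (event: any) => {
  if (event.data?.type === 'set-root') root = event.data.dirHandle;
});

self.addEventListener('fetch', (event: any) => {
  const url = new URL(event.request.url);
  if (!root || !url.pathname.startsWith('/local/')) return;
  event.respondWith((async () => {
    const name = url.pathname.slice('/local/'.length) || 'index.html';
    const fileHandle = await root.getFileHandle(name);
    return new Response(await fileHandle.getFile()); // a File is a Blob
  })());
});
```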
This is a terrific and nuanced talk that packs a lot into less than twenty minutes.
I heartily concur with Rich’s assessment that most websites aren’t apps or documents but something in between. It’s a continuum. And I really like Rich’s proposed approach: transitional web apps.
(The secret sauce in transitional web apps is progressive enhancement.)
Taking the indie web to the next level—self-hosting on your own hardware.
Tired of Big Tech monopolies, a community of hobbyists is taking their digital lives off the cloud and onto DIY hardware that they control.
Simon describes the pattern he uses for content sites to get all of the resilience of static site generators while keeping dynamic functionality.
Lou’s idea was just for a server to remember the last state of a browser’s interaction with it. But that one move—a server putting a cookie inside every visiting browser—crossed a privacy threshold: a personal boundary that should have been clear from the start but was not.
Once that boundary was crossed, and the number and variety of cookies increased, a snowball started rolling, and whatever chance we had to protect our privacy behind that boundary, was lost.
The Doctor is incensed.
At this stage of the Web’s moral devolution, it is nearly impossible to think outside the cookie-based fecosystem.
This website is hosted across a network of solar powered servers and is sent to you from wherever there is the most sunshine.
One of the other arguments we hear in support of the SPA is the reduction in cost of cyber infrastructure. As if pushing that hosting burden onto the client (without their consent, for the most part, but that’s another topic) is somehow saving us on our cloud bills. But that’s ridiculous.
Sensible advice from Chris:
So what’s the best rendering method? Whatever works best for you, but perhaps a hierarchy like this makes some general sense:
- Static HTML as much as you can
- Edge functions over static HTML so you can do whatever dynamic things
- Server generated HTML what you have to after that
- Client-side render only what you absolutely have to
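To make that middle rung concrete, here's what "edge functions over static HTML" can look like — a sketch in the shape of a Cloudflare Worker, with the placeholder and the greeting logic invented:

```typescript
export default {
  async fetch(request: Request): Promise<Response> {
    // Start from the static HTML at the origin…
    const page = await fetch(request);
    const html = await page.text();

    // …and do "whatever dynamic things" before it leaves the edge.
    // CF-IPCountry is a header Cloudflare adds to incoming requests.
    const country = request.headers.get('CF-IPCountry') ?? 'somewhere';
    return new Response(
      html.replace('{{greeting}}', `Hello, visitor from ${country}!`),
      { headers: { 'Content-Type': 'text/html' } },
    );
  },
};
```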
Chris shares his thoughts on the ever-widening skillset required of a so-called front-end developer.
Interestingly, the skillset he mentions halfway through (which is what front-end devs used to need to know) really appeals to me: accessibility, performance, responsiveness, progressive enhancement. But the list that covers modern front-end dev sounds more like a different mindset entirely: APIs, Content Management Systems, business logic …the back of the front end.
And Chris doesn’t even touch on the build processes that front-end devs are expected to be familiar with: version control, build pipelines, package management, and all that crap.
I wish we could return to this:
The bigger picture is that as long as the job is building websites, front-enders are focused on the browser.
I do think large tech companies employ JavaScript frameworks because, amongst other things, it saves them money at their scale. And what Big Tech does trickles down in the form of default choices for many others (“they’re doing it and are insanely successful, so mimicking them can’t be a bad idea”). However, the scale at which smaller projects operate doesn’t necessarily translate to the same kind of cost savings.
I probably need to upgrade the Huffduffer server but Maciej nails why that’s an intimidating prospect:
Doing this on a live system is like performing kidney transplants on a playing mariachi band. The best case is that no one notices a change in the music; you chloroform the players one at a time and try to keep a steady hand while the band plays on. The worst case scenario is that the music stops and there is no way to unfix what you broke, just an angry mob. It is very scary.
This is an interesting project to try to rank web hosts by performance:
Real-world server response (Time to First Byte) latencies, as experienced by real-world users navigating the web.
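That per-user measurement is available right in the browser. A minimal sketch using the Navigation Timing API:

```typescript
// Read the server response latency (Time to First Byte) for the
// current page load — the kind of field data a project like this
// aggregates across real users.
const [nav] = performance.getEntriesByType(
  'navigation',
) as PerformanceNavigationTiming[];

if (nav) {
  // responseStart is when the first byte arrived; startTime is when
  // the navigation began, so the difference is the real-world TTFB.
  const ttfb = nav.responseStart - nav.startTime;
  console.log(`TTFB: ${Math.round(ttfb)}ms`);
}
```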