A (more) Modern CSS Reset - Andy Bell
A solid update to Andy’s four-year-old CSS reset. Best of all, every single line comes with an explanation. So if you don’t like the reasoning, don’t use that line.
CSS is better now. It’s not perfect, but it’s better than it’s ever been, and it’s better than Tailwind. Give it another try. Don’t reach for big globs of libraries to paper over the issues you think it has.
This is why it’s so important to re-evaluate technology decisions.
I’ve seen people, lead and principal engineers among them, who refuse to learn modern JS, insisting that since it was bad in 2006, it’s bad today. Worse still, some of these people have used their leadership positions to prevent the use of modern JS.
The caching strategy for The Session that I wrote about is working a treat.
There are currently about 45,000 different tune settings on the site. One week after introducing the server-side caching, over 40,000 of those settings are already cached.
But even though it’s currently working well, I’m going to change the caching mechanism.
The eagle-eyed amongst you might have raised an eagle eyebrow when I described how the caching happens:
The first time anyone hits a tune page, the ABCs get converted to SVGs as usual. But now there’s one additional step. I grab the generated markup and send it as an Ajax payload to an endpoint on my server. That endpoint stores the sheetmusic as a file in a cache.
I knew when I came up with this plan that there was a flaw. The endpoint that receives the markup via Ajax is accepting data from the client. That data could be faked by a malicious actor.
Sure, I’m doing a whole bunch of checks and sanitisation on the server, but there’s always going to be a way of working around that. You can never trust data sent from the client. I was kind of relying on security through obscurity …except it wasn’t even that obscure because I blogged about it.
So I’m switching over to using a headless browser to extract the sheetmusic. You may recall that I wrote:
I could spin up a headless browser, run the JavaScript and take a snapshot. But that’s a bit beyond my backend programming skills.
That’s still true. So I’m outsourcing the work to Browserless.
There’s a reason I didn’t go with that solution to begin with. Like I said, over 40,000 tune settings have already been cached. If I had used the Browserless API to do that work, it would’ve been quite pricey. But now that the flood is over and there’s just a trickle of caching happening, Browserless is a reasonable option.
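For the curious, the kind of code involved looks roughly like this sketch, which drives Browserless’s hosted Chrome from Node with Puppeteer. The token and selector are placeholders, not necessarily what I’m actually using:

```javascript
// Sketch only: connect to Browserless's hosted Chrome, load a tune page,
// let the client-side JavaScript render the sheetmusic, then grab the SVG.
// The API token and the selector are placeholders.
const puppeteer = require('puppeteer-core');

async function snapshotSheetMusic(url) {
  const browser = await puppeteer.connect({
    browserWSEndpoint: 'wss://chrome.browserless.io?token=YOUR_API_TOKEN'
  });
  const page = await browser.newPage();
  // networkidle0 waits until the page has stopped making requests,
  // which gives abcjs time to finish rendering.
  await page.goto(url, { waitUntil: 'networkidle0' });
  const svg = await page.$eval('.sheetmusic svg', el => el.outerHTML);
  await browser.close();
  return svg;
}
```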
Anyway, that security hole has now been closed. Thank you to everyone who wrote in to let me know about it. Like I said, I was aware of it, but it was good to have it confirmed.
Funnily enough, the security lesson here is the same as my conclusion when talking about performance:
If that means shifting the work from the browser to the server, do it!
It looks like it will be a great tool for prototyping: a way for developers who don’t have experience with CSS and layout to get a starting point. As someone who spent some time building smoke-and-mirrors prototypes for UX research, I welcome tools like this.
What concerns me is the assertion that this is production-grade code when it simply is not.
Performance is a high priority for me with The Session. It needs to work for people all over the world using all kinds of devices.
My main strategy for ensuring good performance is to diligently apply progressive enhancement. The core content is available to any device that can render HTML.
To keep things performant, I’ve avoided as many assets (or, more accurately, liabilities) as possible. No unnecessary images. No superfluous JavaScript libraries. Not even any web fonts (gasp!). And definitely no third-party resources.
The pay-off is a speedy site. If you want to see lab data, run a page from The Session through Lighthouse. To see field data, take a look at the Chrome UX Report (CrUX).
But the devil is in the details. Even though most pages on The Session are speedy, the outliers have bothered me for a while.
Take a typical tune page on the site. The data is delivered from the server as HTML, which loads nice and quick. That data includes the notes for the tune settings, written in ABC notation, a nice lightweight text format.
Then the enhancement happens. Using Paul Rosen’s brilliant abcjs JavaScript library, those ABCs are converted into SVG sheetmusic.
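That enhancement step boils down to something like this (a sketch, assuming abcjs is loaded; the element IDs are illustrative, not the site’s actual markup):

```javascript
// Sketch: read the ABC notation from the page and let abcjs draw the SVG.
// The element IDs here are illustrative.
const abc = document.getElementById('abc-source').textContent;
// renderAbc parses the ABC string and renders SVG sheetmusic into the target.
ABCJS.renderAbc('sheetmusic-container', abc, { responsive: 'resize' });
```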
So on tune pages there’s an additional download for that JavaScript library. That’s not so bad though—I’m using a service worker to cache that file so there’ll only ever be one initial network request.
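In the service worker, that cache-first pattern looks roughly like this (a sketch, not my actual worker code; the filename is hypothetical):

```javascript
// Sketch of a cache-first strategy: serve the abcjs library from the cache,
// falling back to the network on the first visit. The filename is hypothetical.
self.addEventListener('fetch', event => {
  if (event.request.url.endsWith('abcjs.min.js')) {
    event.respondWith(
      caches.open('scripts').then(cache =>
        cache.match(event.request).then(cached =>
          cached || fetch(event.request).then(response => {
            cache.put(event.request, response.clone());
            return response;
          })
        )
      )
    );
  }
});
```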
If a tune has just a few different versions, the page remains nice and zippy. But if a tune has lots of settings, the computation starts to add up. Converting all those settings from ABC to SVG starts to take a cumulative toll on the main thread.
I pondered ways to avoid that conversion step. Was there some way of pre-generating the SVGs on the server rather than doing it all on the client?
In theory, yes. I could spin up a headless browser, run the JavaScript and take a snapshot. But that’s a bit beyond my backend programming skills, so I’ve taken a slightly different approach.
The first time anyone hits a tune page, the ABCs get converted to SVGs as usual. But now there’s one additional step. I grab the generated markup and send it as an Ajax payload to an endpoint on my server. That endpoint stores the sheetmusic as a file in a cache.
Next time someone hits that page, there’s a server-side check to see if the sheetmusic has been cached. If it has, send that down the wire embedded directly in the HTML.
The idea is that over time, most of the sheetmusic on the site will transition from being generated in the browser to being stored on the server.
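The client-side half of that exchange looks something like this sketch (the endpoint URL, payload shape, and element IDs are illustrative):

```javascript
// Sketch: after abcjs has rendered the SVG, post the generated markup back
// to a caching endpoint on the server. The URL and payload are illustrative.
const container = document.getElementById('sheetmusic-container');
fetch('/cache/sheetmusic', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    setting: container.dataset.settingId, // which tune setting this belongs to
    svg: container.innerHTML              // the markup generated in the browser
  })
});
```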
So far it’s working out well.
Take a really popular tune like The Lark In The Morning. There are twenty settings, and each one has four parts. Previously that would have resulted in a few seconds of parsing and rendering time on the main thread. Now everything is delivered up-front.
I’m not out of the woods. A page like that with lots of sheetmusic and plenty of comments is going to have a hefty page weight and a large DOM size. I’ve still got a fair bit of main-thread work happening, but now the bulk of it is style and layout, whereas previously I had the JavaScript overhead on top of that.
I’ll keep working on it. But overall, the speed improvement is great. A typical tune page is now very speedy indeed.
It’s like a microcosm of web performance in general: respect your users’ time, network connection and battery life. If that means shifting the work from the browser to the server, do it!
I received this email recently:
Subject: multi-page web apps
Hi Jeremy,
lately I’ve been following you through videos and texts and I’m curious as to why you advocate the use of multi-page web apps and not single-page ones.
Perhaps you can refer me to some sources where your position and reasoning is evident?
Here’s the response I sent…
Hi,
You can find a lot of my reasoning laid out in this (short and free) online book I wrote called Resilient Web Design:
https://resilientwebdesign.com/
The short answer to your question is this: user experience.
The slightly longer answer…
For most use cases, a website (or multi-page app if you prefer) is going to provide the most robust experience for the greatest number of users. That’s because a user’s web browser takes care of most of the heavy lifting.
Navigating from one page to another? That’s taken care of with links.
Gathering information from a user to process on a server? That’s taken care of with forms.
This frees me up to concentrate on the content and the design without having to reinvent the wheels of links and form fields.
These (let’s call them) multi-page apps are stateless, and for most use cases that’s absolutely fine.
There are some cases where you’d want a state to persist across pages. Let’s say you’re playing a song, or a podcast episode. Ideally you’d want that player to continue seamlessly playing even as the user navigates around the site. In that situation, a single-page app would be a suitable architecture.
But that architecture comes at a cost. Now you’ve got to stop the browser doing what it would normally do with links and forms. It’s up to you to recreate that functionality. And you can’t do it with HTML, a robust fault-tolerant declarative language. You need to reimplement all that functionality in JavaScript, a less tolerant, more brittle language.
Then you’ve got to ship all that code to the user before they can use your site. It might be JavaScript code you’ve written yourself or it might be a third-party library designed for building single-page apps. Either way, the user pays a download tax (and a parsing tax, and an execution tax). Whereas with links and forms, all of that functionality is pre-bundled into the user’s web browser.
So that’s my reasoning. At least nine times out of ten, a multi-page approach is leaner, more robust, and simpler.
Like I said, there are times when a single-page approach makes sense—it all comes down to whether state needs to be constantly preserved. But these use cases are the exceptions, not the rule.
That’s why I find the framing of your question a little concerning. It should be inverted. The default approach should be to assume a multi-page approach (which is the way the web works by default). Deciding to take a JavaScript-driven single-page approach should be the exception.
It’s kind of like when people ask, “Why don’t you have children?” Surely it’s the decision to have a child that should require deliberation and commitment, not the other way around.
When it comes to front-end development, I’m worried that we’ve reached a state where the more complex over-engineered approach is viewed as the default.
I may be committing a fundamental attribution error here, but I think that we’ve reached this point not because of any consideration for users, but rather because of how it makes us developers feel. Perhaps building an old-fashioned website that uses HTML for navigation feels too easy, like it’s beneath us. But building an “app” that requires JavaScript just to render text on a screen feels like real programming.
I hope I’m wrong. I hope that other developers will start to consider user experience first and foremost when making architectural decisions.
Anyway. That’s my answer. User experience.
Cheers,
Jeremy
I knew of most of these front-end development tools (like Utopia, obviously), but some were new to me.
Web Summer Camp in Croatia finished with an interesting discussion. It was labelled a town-hall meeting, but it was more like an Oxford debating club.
Two speakers had two minutes each to speak for or against a particular statement. Their stances were assigned to them so they didn’t necessarily believe what they said.
One of the propositions was something like:
In the future, sustainable design will be as important as UX or performance.
That’s a tough one to argue against! But that’s what Sophia had to do.
She actually made a fairly compelling argument. She said that real impact isn’t going to come from individual websites changing their colour schemes. Real impact is going to come from making server farms run on renewable energy. She advocated for political action to change the system rather than having the responsibility heaped on the shoulders of the individuals making websites.
It’s a fair point. Much like the concept of a personal carbon footprint started life at BP to distract from corporate responsibility, perhaps we’re going to end up navel-gazing into our individual websites when we should be collectively lobbying for real change.
It’s akin to clicktivism—thinking you’re taking action by sharing something on social media, when real action requires hassling your political representative.
I’ve definitely seen some examples of performative sustainability on websites.
For example, at the start of this particular debate at Web Summer Camp we were shown a screenshot of a municipal website that has a toggle. The toggle supposedly enables a low-carbon mode. High resolution images are removed and for some reason the colour scheme goes grayscale. But even if those measures genuinely reduced energy consumption, it’s a bit late to enact them only after the toggle has been activated. Those hi-res images have already been downloaded by then.
Defaults matter. To be truly effective, the toggle needs to work the other way. Start in low-carbon mode, and only download the hi-res images when someone specifically requests them. (Hopefully browsers will implement prefers-reduced-data soon so that we can have our sustainable cake and eat it.)
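When it does land, honouring it could be as simple as this sketch (the data attribute is illustrative, and browsers without support will simply report no preference):

```javascript
// Sketch: default to low-res images, and only swap in the heavy versions
// if the user hasn't asked for reduced data. The data attribute is illustrative.
const reducedData = matchMedia('(prefers-reduced-data: reduce)').matches;
if (!reducedData) {
  document.querySelectorAll('img[data-hires-src]').forEach(img => {
    img.src = img.dataset.hiresSrc; // upgrade to the hi-res version
  });
}
```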
Likewise I’ve seen statistics bandied about regarding the energy savings that could be made if we used dark colour schemes. I’m sure the statistics are correct, but I’d like to see them presented side-by-side with, say, the energy impact of Google Tag Manager or React or any other wasteful dependencies that impact performance invisibly.
That’s the crux. Most of the important work around energy usage on websites is invisible. It’s the work done to not add more images, more JavaScript or more web fonts.
And it’s not just performance. I feel like the more important the work, the more likely it is to be invisible: privacy, security, accessibility …those matter enormously but you can’t see when a website is secure, or accessible, or not tracking you.
I suspect this is why those areas are all frustratingly under-resourced. Why pour time and effort into something you can’t point at?
Now that I think about it, this could explain the rise of web accessibility overlays. If you do the real work of actually making a website accessible, your work will be invisible. But if you slap an overlay on your website, it looks like you’re making a statement about how much you care about accessibility (even though the overlay is total shit and does more harm than good).
I suspect there might be a similar mindset at work when it comes to interface toggles for low-carbon mode. It might make you feel good. It might make you look good. But it’s a poor substitute for making your website carbon-neutral by default.
Gosh! And I thought I had strong opinions about markup!
A historical record of foundational web development blog posts.
Every one of these 42 articles is gold!
It warms my heart to see Resilient Web Design included in this list.
A great reminder of just how much you can do with modern markup and styles when it comes to form validation. The :user-invalid and :user-valid pseudo-classes are particularly handy!
We do quite a bit of prototyping at Clearleft. There’s no better way to reduce risk than to get something in front of users as quickly as possible to test whether you’re on the right track or not.
As Benjamin said in the podcast episode on prototyping:
It’s something to look at, something to prod. And ideally you’re trying to work out what works and what doesn’t.
Sometimes the prototype is mocked up in Figma. Preferably it’s built in code—HTML, CSS, and JavaScript. Having a prototype built in the materials of the medium helps establish a plausible suspension of disbelief during testing.
Also, as Trys said in that same podcast episode:
Prototypical code isn’t production code. It’s quick and it’s often a little bit dirty and it’s not really fit for purpose in that final deliverable. But it’s also there to be inspiring and to gather a team and show that something is possible.
I can’t reiterate that enough: prototype code isn’t production code.
I’ve written about the two different mindsets before:
So these two kinds of work require very different attitudes. For production work, quality is key. For prototyping, making something quickly is what matters.
Addy recently wrote an excellent blog post on the topic of prototyping. The value of a prototype is in the insight it imparts, not the code.
It’s crucial to remember that in a prototype, the code serves merely as a medium—a way to facilitate understanding. It’s a means to an end, not the end itself. The code of a prototype is disposable and mutable. In contrast, the lessons learned from a prototype, the insights gained from user interaction and feedback, are far more durable and impactful.
This!
It can be tempting to re-use code from a prototype. I get it. It seems like a waste to throw away code and build something from scratch. But trust me—and I speak from experience here—it will take more time to wrangle prototype code into something that’s production-ready.
The problem is that quality is often invisible. Think about semantics, performance, security, privacy, and accessibility. Those matter—for production code—but they’re under the surface. For someone who doesn’t understand the importance of those hidden qualities, a prototype that looks like it works seems ready to ship. It’s understandable that they’d balk at the idea of just throwing that code away and writing new code. Sometimes the suspension of disbelief that a prototype is aiming for works too well.
As is so often the case, this isn’t a technical problem. It’s a communication issue.
I don’t think most people using React on a regular basis realize quite how much it’s fallen behind.
Following on from Josh’s earlier post where he said “React isn’t great at anything except being popular”, here are the details.
Every decision React’s made since its inception circa 2013 is another layer of tech debt—one that its newer contemporaries aren’t constrained by.
This is particularly damning:
No other modern frontend framework is as stubbornly incompatible with the platform as React is.
The good news:
React is a bit like a git branch that’s fallen well behind main. You might not realize it, if React is the star your galaxy orbits around, but…well, frontend has moved on. The ecosystem has taken those ideas and run with them to make things that are even better.
Last week Phil posted a little update about his excellent site, ooh.directory:
If you’re in the habit of visiting the Recently Updated Blogs page, and leaving it open, the times when each blog was updated will now keep up with the relentless passing of time.
Does that make sense? “3 minutes ago” will change to “4 minutes ago” and so on and on and on, until you refresh the page.
I thought that was a nice little addition, and I immediately thought of The Session. There are time elements all over the site with relative times as the text content: 2 minutes ago, 7 hours ago, 1 year ago, and so on. Those strings of text are generated on the server. But I figured it would be a nice enhancement to periodically update them in the browser after the page has loaded.
I viewed source to see how Phil was doing it. The code is nice and short, using a library called Day.js with a plug-in for relative time.
“Hang on”, I thought, “isn’t there some web standard for doing this kind of thing?” I had a vague memory of some JavaScript API for formatting dates and times.
Sure enough, we’ve now got the Intl.RelativeTimeFormat object. It’s got browser support across the board.
I’ve got a function that loops through all the time elements with datetime attributes. It compares the current timestamp to that value to get the elapsed time. Then that’s formatted using the format() method and output as innerText.
You need to tell the format() method which units you want to use: seconds, minutes, hours, days, etc. So there’s a little bit of looping to figure out which unit is most appropriate. If the elapsed time is less than a minute, use seconds. If the elapsed time is less than an hour, use minutes. If the elapsed time is less than a day, use hours. You get the idea.
It’s a pity there isn’t some kind of magic unit like “auto” to do this, but it’s not much extra code to figure it out.
Anyway, that function runs periodically using setInterval(). I’ve set it to run every 30 seconds in my gist. On The Session I’ve set it to one minute.
You’ll notice that I’m grabbing all the relevant time elements—using document.querySelectorAll('time[datetime]')—every time the function is run. That may seem inefficient. Couldn’t I just grab them once and then keep them stored as an array? But I want this to work even if the page contents have been updated with Ajax. (Do people even say “Ajax” any more? Get off my lawn, you pesky kids!)
I think I’ve written this code in an abstract way so that you should be able to drop it into any web page. For the calculations to work, you’ll need to make sure that your datetime attributes include timezone information; if there’s no timezone info, UTC is assumed.
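Putting the pieces together, the whole thing is roughly this (a condensed sketch, not the exact code from my gist, with approximate unit thresholds):

```javascript
// Condensed sketch of the approach: periodically rewrite every
// <time datetime="…"> element with a fresh relative time string.
const rtf = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });

// Unit thresholds in milliseconds, largest first (approximate).
const units = [
  ['year', 31536000000],
  ['month', 2592000000],
  ['day', 86400000],
  ['hour', 3600000],
  ['minute', 60000]
];

function updateRelativeTimes() {
  // Query afresh each time so content added via Ajax gets updated too.
  document.querySelectorAll('time[datetime]').forEach(element => {
    // Negative for times in the past, which is what format() expects.
    const elapsed = new Date(element.getAttribute('datetime')) - Date.now();
    // Walk from the largest unit down until one fits; default to seconds.
    const [unit, ms] = units.find(([, size]) => Math.abs(elapsed) >= size) || ['second', 1000];
    element.innerText = rtf.format(Math.round(elapsed / ms), unit);
  });
}

updateRelativeTimes();
setInterval(updateRelativeTimes, 30000); // every 30 seconds
```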
This was a fun little piece of functionality to play around with. Now I know a little more about this Intl.RelativeTimeFormat object. The way I’m using it is a classic example of progressive enhancement. If a browser doesn’t support it, or if my code breaks, it’s no big deal. The functionality is a little bonus that almost nobody will notice anyway. Just a small delighter …if you’re the kind of person who finds it delightful when relative time strings automatically update.
I enjoyed this self-documenting journey of exploration.
A plea to let users do web things on websites. In other words, stop over-complicating everything with buckets of JavaScript.
Honestly, this wishlist isn’t asking for much, and it’s a damning indictment of “modern” frontend development that we’ve come to this:
- Let me copy text so I can paste it.
- If something navigates like a link, let me do link things.
- …
Annalee Newitz:
When we imagine future tech, we usually focus on the ways it could turn humans into robotic workers, easily manipulated by surveillance capitalism. And that’s not untrue. But in this story, I wanted to suggest that there is a more subversive possibility. Modifying our bodies with technology could bring us closer to the natural world.
A great talk from Addy on just how damaging client-side JavaScript can be to the user experience …and what you can do about it.
I learned that geeks think they are rational beings, while they are completely influenced by buzz, marketing, and their emotions. Even more so than the average person, because they believe they are less susceptible to it than normies, so they have a blind spot.