EStimator.dev: the modern JavaScript savings calculator
Find out how much smaller your JavaScript could be.
There’s no browser support yet but that doesn’t mean we can’t start adding prefers-reduced-data
to our media queries today. I like the idea of switching between web fonts and system fonts.
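Something like this, perhaps (a minimal sketch only; the system-fonts class is an invented hook that the CSS would use to swap the web font stack for a system font stack):
// A sketch only: no browser supports prefers-reduced-data yet,
// so this media query won't match anywhere today.
const reducedData = window.matchMedia('(prefers-reduced-data: reduce)');
if (reducedData.matches) {
  // "system-fonts" is a hypothetical class that the CSS keys off.
  document.documentElement.classList.add('system-fonts');
}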
If you’re using web fonts, there are good performance (and privacy) reasons for hosting your own font files. And fortunately, Google Fonts gives you that option. There’s a “Download family” button on every specimen page.
But if you go ahead and download a font family from Google Fonts, you’ll notice something a bit odd. The .zip file only contains .ttf files. You can serve those on the web, but it’s far from the best choice: WOFF2 is much leaner in file size.
This means you need to manually convert the downloaded .ttf files into .woff or .woff2 files using something like Font Squirrel’s generator. That’s fine, but I’m curious as to why this step is necessary. Why doesn’t Google Fonts provide .woff or .woff2 files in the downloaded folder? After all, if you choose to use Google Fonts as a third-party hosting service for your fonts, it most definitely serves up the appropriate file formats.
I thought maybe it was something to do with the licensing. Maybe some licenses only allow for unmodified truetype files to be distributed? But I’ve looked at fonts with different licenses—some have Apache 2 licensing, some have Open Font licensing—and they’re all quite permissive and definitely allow for modification.
Maybe the thinking is that, if you’re hosting your own font files, then you know what you’re doing and you should be able to do your own file conversion and subsetting. But I’ve come across more than one website in the wild serving up .ttf files. And who can blame them? They want to host their own font files. They downloaded those files from Google Fonts. Why shouldn’t they assume that they’re good to go?
It’s all a bit strange. If anyone knows why Google Fonts only provides .ttf files for download, please let me know. In a pinch, I will also accept rampant speculation.
Trys also pointed out some weird default behaviour if you do let Google Fonts do the hosting for you. Specifically if it’s a variable font. Let’s say it’s a font with weight as a variable axis. You specify in advance which weights you’ll be using, and then it generates separate font files to serve for each different weight.
Doesn’t that defeat the whole point of using a variable font? I mean, I can see how it could result in smaller file sizes if you’re just using one or two weights, but isn’t half the fun of having a weight axis that you can go crazy with as many weights as you want and it’s all still one font file?
Like I said, it’s all very strange.
There’s a new image format on the browser block and it’s very performant indeed. Jake has all the details you didn’t ask for.
How and when did I get to the point where I would consider a page weight of 4 MB on a large page and 500 KB on a small page normal?
This isn’t just a well-earned rant from Manuel. I mean, it is that, but it’s also packed with practical performance advice.
It’s time for a look at the state of the web when it comes to JavaScript usage. Here’s the report powered by data from HTTP Archive:
JavaScript is the most costly resource we send to browsers; having to be downloaded, parsed, compiled, and finally executed. Although browsers have significantly decreased the time it takes to parse and compile scripts, download and execution have become the most expensive stages when JavaScript is processed by a web page.
Sending smaller JavaScript bundles to the browser is the best way to reduce download times, and in turn improve page performance. But how much JavaScript do we really use?
When it comes to frameworks and UI libraries, there are some interesting numbers. Given the volume of chatter in the dev world, you’d be forgiven for thinking that React is used on the majority of websites today. The real number? 4.6% of websites. That’s less than the number of websites using CSS custom properties.
This is reminding me of what I wrote about dev perception.
Worlds of scarcity are made out of things. Worlds of abundance are made out of dependencies. That’s the software playbook: find a system made of costly, redundant objects; and rearrange it into a fast, frictionless system made of logical dependencies. The delta in performance is irresistible, and dependencies are a compelling building block: they seem like just a piece of logic, with no cost and no friction. But they absolutely have a cost: the cost is complexity, outsourced agency, and brittleness. The cost of ownership is up front and visible; the cost of access is back-dated and hidden.
This looks like a nice way to get a blog up and running:
Blot turns a folder into a blog. Drag-and-drop files inside to publish. Images, text files, Word Documents, Markdown and more become blog posts automatically.
Jason describes the next big thing in web typography: streaming fonts!
…to enable the ability for only the required part of the font to be downloaded on any given page, and for subsequent requests for that font to dynamically ‘patch’ the original download with additional sets of glyphs as required on successive page views—even if they occur on separate sites.
This is my kind of URL nerdery. Remy ponders all the permutations of URLs ending with slashes, ending without slashes, ending with a file extension…
This’ll be handy the next time I want to send someone a file: drop it in here, and then paste the link into a DM/chat.
Following on from that proposal for a browser feature that I linked to yesterday, Tim thinks through all the permutations and possibilities of user agents allowing users to throttle resources:
If a limit does get enforced (it’s important to remember this is still a big if right now), as long as it’s handled with care I can see it being an excellent thing for the web that prioritizes users, while still giving developers the ability to take control of the situation themselves.
Drag and drop a file up to 400MB and share the URL without a log-in (the URLs are using What Three Words).
I’ve been playing around with the newly-released Squoosh, the spiritual successor to Jake’s SVGOMG. You can drag images into the browser window, and eyeball the changes that any optimisations might make.
On a project that Cassie is working on, it worked really well for optimising some JPEGs. But there were a few images that would require a bit more fine-grained control of the optimisations. Specifically, pictures with human faces in them.
I’ve written about this before. If there’s a human face in an image, I open that image in a graphics editing tool like Photoshop, select everything but the face, and add a bit of blur. Because humans are hard-wired to focus on faces, we’ll notice any jaggy artifacts on a face, but we’re far less likely to notice jagginess in background imagery: walls, materials, clothing, etc.
On the face of it (hah!), a browser-based tool like Squoosh wouldn’t be able to optimise for faces, but then Cassie pointed out something really interesting…
When we were both at FFConf on Friday, there was a great talk by Eleanor Haproff on machine learning with JavaScript. It turns out there are plenty of smart toolkits out there, and facial recognition is one of the things they can do. So I wonder if it’s possible to build an in-browser tool with this workflow:
Drag an image into the browser window.
Run facial recognition on the image to find any faces.
Select everything in the image that isn’t a face.
Apply a slight blur to that selection.
Output the optimised image.
Maybe the selecting/blurring part would need canvas? I don’t know.
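Just to sketch out the canvas idea: given a face’s bounding box from whichever machine-learning library you pick (the detectFace function below is entirely hypothetical), you could draw a blurred copy of the whole image and then paint the unblurred face region back on top:
// A sketch, not a finished tool: detectFace() stands in for whatever
// face-detection library you end up using.
async function blurAroundFace(img) {
  const canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  const context = canvas.getContext('2d');
  // Draw a slightly blurred version of the whole image...
  context.filter = 'blur(3px)';
  context.drawImage(img, 0, 0);
  // ...then draw the face region back in, unblurred.
  const face = await detectFace(img); // returns {x, y, width, height}
  context.filter = 'none';
  context.drawImage(
    img,
    face.x, face.y, face.width, face.height,
    face.x, face.y, face.width, face.height
  );
  return canvas;
}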
Anyway, I thought this was a brilliant bit of synthesis from Cassie, and now I’ve got two questions:
A handy in-browser image compression tool. Drag, drop, tweak, and export.
I remember Jason telling me about this weird service worker caching behaviour a little while back. This piece is a great bit of sleuthing in tracking down the root causes of this strange issue, followed up with a sensible solution.
In Going Offline, I dive into the many different ways you can use a service worker to handle requests. You can filter by the URL, for example; treating requests for pages under /blog or /articles differently from other requests. Or you can filter by file type. That way, you can treat requests for, say, images very differently to requests for HTML pages.
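That URL filtering looks something like this (a rough sketch rather than the exact code from the book; the network-only response is just a placeholder, the point is the pathname check):
addEventListener('fetch', event => {
  const url = new URL(event.request.url);
  // Treat blog and article pages differently from everything else.
  if (url.pathname.startsWith('/blog') || url.pathname.startsWith('/articles')) {
    // Placeholder strategy: just go to the network.
    event.respondWith(fetch(event.request));
  }
});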
One of the ways to check what kind of request you’re dealing with is to see what’s in the accept header. Here’s how I show the test for HTML pages:
if (request.headers.get('Accept').includes('text/html')) {
// Handle your page requests here.
}
So, logically enough, I show the same technique for detecting image requests:
if (request.headers.get('Accept').includes('image')) {
// Handle your image requests here.
}
That should catch any files that have image in the request’s accept header, like image/png or image/jpeg or image/svg+xml and so on.
But there’s a problem. Both Safari and Firefox now use a much broader accept header: */*
My if statement evaluates to false in those browsers. Sebastian Eberlein wrote about his workaround for this issue, which involves looking at file extensions instead:
if (request.url.match(/\.(jpe?g|png|gif|svg)$/)) {
// Handle your image requests here.
}
So consider this post a patch for chapter five of Going Offline (page 68 specifically). Wherever you see:
if (request.headers.get('Accept').includes('image'))
Swap it out for:
if (request.url.match(/\.(jpe?g|png|gif|svg)$/))
And feel free to add any other image file extensions (like webp) in there too.
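For context, here’s roughly how that patched line might sit inside a fetch event handler; the cache-first strategy is only for illustration, not necessarily what the book recommends:
addEventListener('fetch', event => {
  const request = event.request;
  if (request.url.match(/\.(jpe?g|png|gif|svg|webp)$/)) {
    // Handle your image requests here: try the cache first,
    // fall back to the network.
    event.respondWith(
      caches.match(request)
      .then(responseFromCache => responseFromCache || fetch(request))
    );
  }
});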
Handy web-based tools—compress HTML, CSS, and JavaScript, and convert files from one format to another.
A step-by-step guide to implementing drag’n’drop, and image previews with the FileReader API. No libraries or frameworks were harmed in the making of this article.
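The core pattern is only a few lines; here’s a rough sketch (the dropzone and preview elements are invented for the example):
// dropzone is any element you drop onto; preview is an img element.
const dropzone = document.querySelector('#dropzone');
const preview = document.querySelector('#preview');

dropzone.addEventListener('dragover', event => {
  event.preventDefault(); // allow the drop
});

dropzone.addEventListener('drop', event => {
  event.preventDefault();
  const file = event.dataTransfer.files[0];
  const reader = new FileReader();
  reader.addEventListener('load', () => {
    preview.src = reader.result; // a data: URL of the dropped image
  });
  reader.readAsDataURL(file);
});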
This blog post saved my ass—the Huffduffer server was b0rked and after much Duck-Duck-Going I found the answer here.
I’m filing this away for my future self because, as per Murphy’s Law, I’m pretty sure I’ll be needing this again at some point.