Tags: performance



Monday, January 22nd, 2018

Bad Month for the Main Thread - daverupert.com

JavaScript is CPU intensive and the CPU is the bottleneck for performance.

I’m on Team Dave.

But darn it all, I just want to build modular websites using HTML and a little bit of JavaScript.

Saturday, January 20th, 2018

Needs must

I got a follow-up comment to my follow-up post about the follow-up comment I got on my original post about Google Analytics. Keep up.

I made the point that, from a front-end performance perspective, server logs have no impact whereas a JavaScript-based analytics solution must have some impact on the end user. Paul Anthony says:

Google won the analytics war because dropping one line of JS in the footer and handing a tried and tested interface to customers is an obvious no brainer in comparison to setting up an open source option that needs a cron job to parse the files, a database to store the results and doesn’t provide mobile interface.

Good point. Dropping one snippet of JavaScript into your front-end codebase is certainly an easier solution …easier for you, that is. The cost is passed on to your users. This is a classic example of where user needs and developer needs are in opposition. I’ve said it before and I’ll say it again:

Given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time.

It’s true that this often means doing more work. That’s why it’s called work. This is literally what our jobs are supposed to entail: we put in the work to make life easier for users. We’re supposed to be saving them time, not passing it along.

The example of Google Analytics is pretty extreme, I’ll grant you. The cost to the user of adding that snippet of JavaScript—if you’ve configured things reasonably well—is pretty small (again, just from a performance perspective; there’s still the cost of allowing Google to track them across domains), and the cost to you of setting up a comparable analytics system based on server logs can indeed be disproportionately high. But this tension between user needs and developer needs is something I see play out again and again.

I’ve often thought the HTML design principle called the priority of constituencies could be adopted by web developers:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors.

In Resilient Web Design, I documented the three-step approach I take when I’m building anything on the web:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Now I’m wondering if I should’ve clarified that second step further. When I talk about choosing “the simplest possible technology”, what I mean is “the simplest possible technology for the user”, not “the simplest possible technology for the developer.”

For example, suppose I were going to build a news website. The core functionality is fairly easy to identify: providing the news. Next comes the step where I choose the simplest possible technology. Now, if I were a developer who had plenty of experience building JavaScript-driven single page apps, I might conclude that the simplest route for me would be to render the news via JavaScript. But that would be a fragile starting point if I’m trying to reach as many people as possible (I might well end up building a swishy JavaScript-driven single page app in step three, but step two should almost certainly be good ol’ HTML).
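
As a rough sketch of what that means in practice (mine, not from the original post—the file name is hypothetical), the news itself ships as server-rendered HTML, and the JavaScript-driven experience is only bolted on in browsers that cut the mustard:

```js
// Step two already happened on the server: the news is in the HTML.
// Step three: load the fancier, app-like experience only in browsers that
// cut the mustard; everyone else keeps the perfectly usable HTML page.
if ('querySelector' in document && 'fetch' in window) {
  var script = document.createElement('script');
  script.src = '/js/enhanced-news.js'; // hypothetical enhancement bundle
  script.defer = true;
  document.head.appendChild(script);
}
```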

Time and time again, I see decisions that favour developer convenience over user needs. Don’t get me wrong—as a developer, I absolutely want developer convenience …but not at the expense of user needs.

I know that “empathy” is an over-used word in the world of user experience and design, but with good reason. I think we should try to remind ourselves of why we make our architectural decisions by invoking who those decisions benefit. For example, “This tech stack is the best option for our team”, or “This solution is the best for the widest range of users.” Then, given the choice, favour user needs in the decision-making process.

There will always be situations where, given time and budget constraints, we end up choosing solutions that are easier for us, but not the best for our users. And that’s okay, as long as we acknowledge that compromise and strive to do better next time.

But when the best solutions for us as developers become enshrined as the best possible solutions, then we are failing the people we serve.

That doesn’t mean we must become hairshirt-wearing martyrs; developer convenience is important …but not as important as user needs. Start with user needs.

Friday, January 19th, 2018


I wrote about Google Analytics yesterday. As usual, I syndicated the post to Ev’s blog, and I got an interesting response over there. Kelly Burgett set me straight on some of the finer details of how goals work, and finished with this thought:

You mention “delivering a performant, accessible, responsive, scalable website isn’t enough” as if it should be, and I have to disagree. It’s not enough for a business to simply have a great website if you are unable to understand performance of channel marketing, track user demographics and behavior on-site, and optimize your site/brand based on that data. I’ve seen a lot of ugly sites who have done exceptionally well in terms of ROI, simply because they are getting the data they need from the site in order make better business decisions. If your site cannot do that (ie. through data collection, often third party scripts), then your beautifully-designed site can only take you so far.

That makes an excellent case for having analytics. But that’s not necessarily the same as having Google Analytics, or even JavaScript-driven analytics at all.

By far the most useful information you get from analytics is where people have come from, where they went next, and what kind of device they’re using. None of that information requires JavaScript. It’s all available from your server logs.
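
As a rough illustration (my sketch, not from the post—it assumes the standard “combined” log format and a typical log path), a few lines of Node can pull referrers and user agents straight out of the access log, at zero cost to the user:

```js
// Node.js sketch: tally referrers and user agents from an nginx/Apache
// "combined" format access log. Assumes the conventional field order.
const fs = require('fs');
const readline = require('readline');

const referrers = new Map();
const userAgents = new Map();

// Combined log format: ip - user [date] "request" status bytes "referrer" "user-agent"
const pattern = /"[^"]*" \d+ \S+ "([^"]*)" "([^"]*)"$/;

const rl = readline.createInterface({
  input: fs.createReadStream('/var/log/nginx/access.log'), // assumed path
});

rl.on('line', (line) => {
  const match = line.match(pattern);
  if (!match) return;
  const [, referrer, userAgent] = match;
  referrers.set(referrer, (referrers.get(referrer) || 0) + 1);
  userAgents.set(userAgent, (userAgents.get(userAgent) || 0) + 1);
});

rl.on('close', () => {
  console.log('Top referrers:', [...referrers].sort((a, b) => b[1] - a[1]).slice(0, 10));
  console.log('Top user agents:', [...userAgents].sort((a, b) => b[1] - a[1]).slice(0, 10));
});
```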

I don’t want to come across all old-man-yell-at-cloud here, but I’m trying to remember at what point self-hosted software for analysing your log traffic became not good enough.

Here’s the thing: logging on the server has no effect on the user experience. It’s basically free, in terms of performance. Logging via JavaScript, by its very nature, has some cost. Even if it’s negligible, that’s one more request, and that’s one more bit of processing for the CPU.

All of the data that you can only get via JavaScript (in-page actions, heat maps, etc.) are, in my experience, better handled by dedicated software. To me, that kind of more precise data feels different to analytics in the sense of funnels, conversions, goals and all that stuff.

So in order to get more fine-grained data to analyse, our analytics software has now doubled down on a technology—JavaScript—that has an impact on the end user, where previously the act of observation could be done at a distance.

There are also blind spots that come with JavaScript-based tracking. According to Google Analytics, 0% of your customers don’t have JavaScript. That’s not necessarily true, but there’s literally no way for Google Analytics—which relies on JavaScript—to even do its job in the absence of JavaScript. That can lead to a dangerous situation where you might be led to think that 100% of your potential customers are getting by, when actually a proportion might be struggling, but you’ll never find out about it.

Related: according to Google Analytics, 0% of your customers are using ad-blockers that block requests to Google’s servers. Again, that’s not necessarily a true fact.

So I completely agree that analytics are a good thing to have for your business. But it does not follow that Google Analytics is a good thing for your business. Other options are available.

I feel like the assumption that “analytics = Google Analytics” is like the slippery slope in reverse. If we’re all agreed that analytics are important, then aren’t we also all agreed that JavaScript-based tracking is important?

In a word, no.

This reminds me of the arguments made in favour of intrusive, bloated advertising scripts. All of the arguments focus on the need for advertising—to stay in business, to pay the writers—which are all great reasons for advertising, but have nothing to do with JavaScript, which is at the root of the problem. Everyone I know who uses an ad-blocker—including me—doesn’t use it to stop seeing adverts, but to stop the performance of the page being degraded (and to avoid being tracked across domains).

So let’s not confuse the means with the ends. If you need to have advertising, that doesn’t mean you need to have horribly bloated JavaScript-based advertising. If you need analytics, that doesn’t mean you need an analytics script on your front end.

Thursday, January 18th, 2018

Finding Dead CSS – CSS Wizardry

Here’s a clever idea from Harry if you’re willing to play the long game in tracking down redundant CSS—add a transparent background image to the rule block and then sit back and watch your server logs for any sign of that sleeper agent ever getting activated.

If you do find entries for that particular image, you know that, somehow, the legacy feature is potentially still accessible—the number of entries should give you a clue as to how severe the problem might be.
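
A minimal sketch of the idea (the selector and image path are placeholders, not from Harry’s article):

```css
/* Suspected-dead rule: if this selector ever matches on a live page, the
   browser requests the image and the hit shows up in the server logs.
   The image itself is just a transparent 1×1 GIF. */
.legacy-carousel {
  background-image: url('/img/dead-css/legacy-carousel.gif');
}
```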

Analysing analytics

Hell is other people’s JavaScript.

There’s nothing quite so crushing as building a beautifully performant website only to have it infested with a plague of third-party scripts that add to the weight of each page and reduce the responsiveness, making a mockery of your well-considered performance budget.

Trent has been writing about this:

My latest realization is that delivering a performant, accessible, responsive, scalable website isn’t enough: I also need to consider the impact of third-party scripts.

He’s started the process by itemising third-party scripts. Frustratingly though, there’s rarely one single culprit that you can point to—it’s the cumulative effect of “just one more beacon” and “just one more analytics script” and “just one more A/B testing tool” that adds up to a crappy experience that warms your user’s hands by ensuring your site is constantly draining their battery.

Actually, having just said that there’s rarely one single culprit, Adobe Tag Manager is often at the root of third-party problems. That and adverts. It’s like opening the door of your beautifully curated dream home, and inviting a pack of diarrhetic elephants in: “Please, crap wherever you like.”

But even the more well-behaved third-party scripts can get out of hand. Google Analytics is so ubiquitous that it’s hardly even considered in the list of potentially harmful third-party scripts. On the whole, it’s a fairly well-behaved citizen of your site’s population of third-party scripts (y’know, leaving aside the whole surveillance capitalism business model that allows you to use such a useful tool for free in exchange for Google tracking your site’s visitors across the web and selling the insights from that data to advertisers).

The initial analytics script that you—asynchronously—load into your page isn’t very big. But depending on how you’ve configured your Google Analytics account, that might just be the start of a longer chain of downloads and event handlers.
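
For context, the inline half of the asynchronous snippet of that era looked something like this (the property ID is a placeholder); the library itself arrives via a separate async script element, and until it does, ga() simply queues up commands:

```js
// The "alternative async" analytics.js snippet, roughly as documented at the time.
// A separate <script async src="https://www.google-analytics.com/analytics.js">
// tag loads the library; until then, ga() just pushes commands onto a queue.
window.ga = window.ga || function () { (ga.q = ga.q || []).push(arguments); };
ga.l = +new Date();
ga('create', 'UA-XXXXXX-Y', 'auto'); // placeholder property ID
ga('send', 'pageview');
```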

Ed recently gave a lunchtime presentation at Clearleft on using Google Analytics—he professes modesty but he really knows his stuff. He was making sure that everyone knew how to set up goals’n’stuff.

As I understand it, there are two main categories of goals: events and destinations (there are also durations and pages, but they feel similar to destinations). You use events to answer questions like “Did the user click on this button?” or “Did the user click on that search field?”. You use destinations to answer questions like “Did the user arrive at this page?” or “Did the user come from that page?”

You can add as many goals to your site’s analytics as you want. That’s an intoxicating offer. The problem is that there is potentially a cost for each goal you create. It’s an invisible cost. It’s paid by the user in the currency of JavaScript sent down the wire (I wish that the Google Analytics admin interface were more like the old interface for Google Fonts, where each extra file you added literally pushed a needle higher on a dial).

It strikes me that the event-based goals would necessarily require more JavaScript in order to listen out for those clicks and fire off that information. The destination-based goals should be able to get all the information needed from regular page navigations.
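
A quick sketch of why that seems plausible (my illustration; the element, category, and label names are made up): an event goal needs script wired up to the interaction, whereas a destination goal is matched against the pageviews that are being sent anyway:

```js
// Event-based goal: extra JavaScript has to run in the page to observe the
// interaction and report it.
document.querySelector('#signup-button').addEventListener('click', () => {
  ga('send', 'event', 'engagement', 'click', 'signup-button'); // placeholder names
});

// Destination-based goal: nothing extra runs in the page. The goal is defined
// in the Google Analytics admin interface as a URL to match (e.g. /thanks/),
// and it's evaluated against the pageviews already being recorded.
```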

So I have a hypothesis. I think that destination-based goals are less harmful to performance than event-based goals. I might well be wrong about that, and if I am, please let me know.

With that hypothesis in mind, and until I learn otherwise, I’ve got two rules of thumb to offer when it comes to using Google Analytics:

  1. Try to keep the number of goals to a minimum.
  2. If you must create a goal, favour destinations over events.

Friday, January 12th, 2018

Third-Party Scripts | CSS-Tricks

Hell is other people’s JavaScript.

Third-party scripts are probably the #1 cause of poor performance and bad UX on the web.

Wednesday, January 3rd, 2018

A Browser You’ve Never Heard of Is Dethroning Google in Asia - WSJ

I’m always happy to see a thriving market of competition amongst browsers—we had a browser monopoly once before and it was a bad situation.

(That said, UC Browser has its own issues.)

Monday, January 1st, 2018

Why Web Developers Need to Care about Interactivity — Philip Walton

Just to be clear, this isn’t about interaction design; it’s about how browsers become unresponsive to interaction when they’re trying to parse the truckloads of JavaScript web developers throw at them.

Top tip: lay off the JavaScript. HTML is interactive instantly.

Sunday, December 3rd, 2017

Saturday, December 2nd, 2017

News | Voyager 1 Fires Up Thrusters After 37 Years

I want to build websites that perform this well.

On Tuesday, Nov. 28, 2017, Voyager engineers fired up the four TCM thrusters for the first time in 37 years and tested their ability to orient the spacecraft using 10-millisecond pulses. The team waited eagerly as the test results traveled through space, taking 19 hours and 35 minutes to reach an antenna in Goldstone, California, that is part of NASA’s Deep Space Network.

Lo and behold, on Wednesday, Nov. 29, they learned the TCM thrusters worked perfectly — and just as well as the attitude control thrusters.

Wednesday, November 29th, 2017

edent/SuperTinyIcons: Under 1KB each! Super Tiny Icons are miniscule SVG versions of your favourite website and app logos

These are lovely little SVGs of website logos that are yours for the taking. And if you want to contribute an icon to the collection, go for it …as long as it’s less than 1024 bytes (most of these are waaay less).

Monday, November 27th, 2017

Network based image loading using the Network Information API in Service Worker | justmarkup

This is clever—you can use the navigator.connection API from a service worker (because it’s asynchronous), which means you can have a service worker script that serves differently sized images based on bandwidth.
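
Here’s roughly what that looks like (my sketch, with a made-up image-naming scheme—the article’s actual implementation may differ): the service worker checks navigator.connection inside its fetch handler and rewrites image requests accordingly:

```js
// Inside a service worker: pick an image variant based on the connection.
// navigator.connection is available in workers in supporting browsers
// (Chromium-based, at the time of writing); everyone else gets the default.
self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);

  if (event.request.destination === 'image' && url.pathname.endsWith('-large.jpg')) {
    const connection = navigator.connection;
    const slow = connection &&
      (connection.effectiveType === '2g' ||
       connection.effectiveType === 'slow-2g' ||
       connection.saveData);

    if (slow) {
      // Swap the large image for a smaller variant (hypothetical naming scheme).
      const smallUrl = url.href.replace('-large.jpg', '-small.jpg');
      event.respondWith(fetch(smallUrl));
      return;
    }
  }
  // For everything else, fall through to the default network behaviour.
});
```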

The Fallacies of Distributed Computing (Applied to Front-End Performance) – CSS Wizardry – CSS Architecture, Web Performance Optimisation, and more, by Harry Roberts

Harry cautions against making assumptions about the network when it comes to front-end development:

Yet time and time again I see developers falling into the same old traps—making assumptions or overly-optimistic predictions about the conditions in which their apps will run.

Planning for the worst-case scenario is never a wasted effort:

If you build and structure applications such that they survive adverse conditions, then they will thrive in favourable ones.

Saturday, November 18th, 2017

Using SVG as placeholders — More Image Loading Techniques - JMPerez Blog

Here’s a clever technique to improve the perceived performance of image loading with a polygonal SVG placeholder.

Thursday, November 9th, 2017

The Contrast Swap Technique: Improved Image Performance with CSS Filters | CSS-Tricks

A clever performance trick for images:

  1. Reduce image contrast using a linear transform function (Photoshop can do this)
  2. Apply a contrast filter in CSS to the image to make up for the contrast removal
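
In CSS, step two is essentially a one-liner (the class name and filter value here are illustrative; the right value depends on how much contrast was removed in step one):

```css
/* Restore the contrast that was removed from the source image before export.
   The lower-contrast image compresses better; the filter puts the punch back. */
.contrast-swapped {
  filter: contrast(1.3); /* illustrative value */
}
```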

Sunday, November 5th, 2017

Against an Increasingly User-Hostile Web - Neustadt.fr

With echoes of Anil Dash’s The Web We Lost, this essay is a timely reminder—with practical advice—for us designers and developers who are making the web …and betraying its users.

You see, the web wasn’t meant to be a gated community. It’s actually pretty simple.

A web server, a public address and an HTML file are all that you need to share your thoughts (or indeed, art, sound or software) with anyone in the world. No authority from which to seek approval, no editorial board, no publisher. No content policy, no dependence on a third party startup that might fold in three years to begin a new adventure.

That’s what the web makes possible. It’s friendship over hyperlink, knowledge over the network, romance over HTTP.

Thursday, November 2nd, 2017

The dConstruct Audio Archive works offline

The dConstruct conference is as old as Clearleft itself. We put on the first event back in 2005, the year of our founding. The last dConstruct was in 2015. It had a good run.

I’m really proud of the three years I ran the show—2012, 2013, and 2014—and I have great memories from each event. I’m inordinately pleased that the individual websites are still online after all these years. I’m equally pleased with the dConstruct audio archive that we put online in 2012. Now that the event itself is no longer running, it truly is an archive—a treasury of voices from the past.

I think that these kinds of online archives are eminently suitable for some offline design. So I’ve added a service worker script to the dConstruct archive.


To start with, there’s the no-brainer: as soon as someone hits the website, pre-cache static assets like CSS, JavaScript, the logo, and icon images. Now subsequent page loads will be quicker—those assets are taken straight from the cache.
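
The pre-caching part is a few lines in the service worker’s install handler (the cache name and asset paths here are placeholders, not the archive’s actual file names):

```js
const STATIC_CACHE = 'dconstruct-static-v1'; // hypothetical cache name

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(STATIC_CACHE).then((cache) =>
      cache.addAll([
        '/css/styles.css', // placeholder asset paths
        '/js/scripts.js',
        '/img/logo.svg',
        '/img/icon.png',
      ])
    )
  );
});
```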

But what about the individual pages? For something like Resilient Web Design—another site that won’t be updated—I pre-cache everything. I could do that with the dConstruct archive. All of the pages with all of the images add up to less than two megabytes; the entire site weighs less than a single page on Wired.com or The Verge.

In the end, I decided to go with a cache-as-you-go strategy. Every time a page or an image is fetched from the network, it is immediately put in a cache. The next time that page or image is requested, the file is served from that cache instead of the network.

Here’s the logic for fetch requests:

  1. First, look to see if the file is in a cache. If it is, great! Serve that.
  2. If the file isn’t in a cache, make a network request and serve the response …but put a copy of the file in the cache.
  3. The next time that file is requested, go to step one.
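
Expressed as a service worker fetch handler, that logic looks something like this (the cache name is a placeholder, and the real script on the archive may differ in the details):

```js
const PAGES_CACHE = 'dconstruct-pages-v1'; // hypothetical cache name

self.addEventListener('fetch', (event) => {
  // Only GET requests make sense to cache.
  if (event.request.method !== 'GET') {
    return;
  }
  event.respondWith(
    // 1. Look for the file in the cache first.
    caches.match(event.request).then((cached) => {
      if (cached) {
        return cached;
      }
      // 2. Not cached yet: fetch it from the network, serve the response,
      //    and put a copy in the cache for next time.
      return fetch(event.request).then((response) => {
        const copy = response.clone();
        caches.open(PAGES_CACHE).then((cache) => cache.put(event.request, copy));
        return response;
      });
    })
  );
});
```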

Save for offline

That caching strategy works great for pages, images, and other assets. But there’s one kind of file on the dConstruct archive that’s a bit different: the audio files. They can be fairly big, so I don’t want to cache those unless the user specifically requests it.

If you end up on the page for a particular talk, and your browser supports service workers, you’ll get an additional UI element in the list of options: a toggle to “save offline” (under the hood, it’s a checkbox). If you activate that option, then the audio file gets put into a cache.
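
Under the hood, that sort of toggle only needs a few lines (the element selectors and cache name are my placeholders):

```js
const AUDIO_CACHE = 'dconstruct-audio-v1';               // hypothetical cache name
const toggle = document.querySelector('#save-offline');  // hypothetical checkbox
const audioUrl = document.querySelector('audio').src;

toggle.addEventListener('change', () => {
  if (toggle.checked) {
    // Fetch the audio file and keep a copy for offline listening.
    caches.open(AUDIO_CACHE).then((cache) => cache.add(audioUrl));
  } else {
    // Remove it again if the option is switched off.
    caches.open(AUDIO_CACHE).then((cache) => cache.delete(audioUrl));
  }
});
```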

Now if you lose your network connection while browsing the site, you’ll get a custom offline page with the option to listen to any audio files you saved for offline listening. You’ll also see this collection of talks on the homepage, regardless of whether you’ve got an internet connection or not.

So if you’ve got a long plane journey ahead of you, have a browse around the dConstruct archive and select some talks for your offline listening pleasure.

Or just enjoy the speediness of browsing the site.

Turning another website into a Progressive Web App.

Wednesday, November 1st, 2017

Rebuilding slack.com – Several People Are Coding

A really great case study of a code refactor by Mina, with particular emphasis on the benefits of CSS Grid, fluid typography, and accessibility.

Sunday, October 29th, 2017

Can You Afford It?: Real-world Web Performance Budgets – Infrequently Noted

Alex looks at the mindset and approaches you need to adopt to make a performant site. There’s some great advice in here for setting performance budgets for JavaScript.

JavaScript is the single most expensive part of any page in ways that are a function of both network capacity and device speed. For developers and decision makers with fast phones on fast networks this is a double-whammy of hidden costs.

Friday, October 27th, 2017

Transpiled for-of Loops are Bad for the Client - daverupert.com

This story is just a personal reminder for me to repeatedly question what our tools spit out. I don’t want to be the neophobe in the room but I sometimes wonder if we’re living in a collective delusion that the current toolchain is great when it’s really just morbidly complex. More JavaScript to fix JavaScript concerns the hell out of me.

Yes! Even if you’re not interested in the details of Dave’s story of JavaScript optimisation, be sure to read his conclusion.

I am responsible for the code that goes into the machine, I do not want to shirk the responsibility of what comes out. Blind faith in tools to fix our problems is a risky choice. Maybe “risky” is the wrong word, but it certainly seems that we move the cost of our compromises to the client and we, speaking from personal experience, rarely inspect the results.