Tags: speed

On The Verge

Quite a few people have been linking to an article on The Verge with the inflammatory title The mobile web sucks. In it, Nilay Patel heaps blame upon mobile browsers, Safari in particular:

But man, the web browsers on phones are terrible. They are an abomination of bad user experience, poor performance, and overall disdain for the open web that kicked off the modern tech revolution.

Les Orchard says what we’re all thinking in his detailed response The Verge’s web sucks:

Calling out browser makers for the performance of sites like his? That’s a bit much.

Nilay does acknowledge that the Verge could do better:

Now, I happen to work at a media company, and I happen to run a website that can be bloated and slow. Some of this is our fault: The Verge is ultra-complicated, we have huge images, and we serve ads from our own direct sales and a variety of programmatic networks.

But still, it sounds like the buck is being passed along. The performance issues are being treated as Somebody Else’s Problem …ad networks, trackers, etc.

The developers at Vox Media take a different, and in my opinion, more correct view. They’re declaring performance bankruptcy:

I mean, let’s cut to the chase here… our sites are friggin’ slow, okay!

But I worry about how they can possibly reconcile their desire for a faster website with a culture that accepts enormously bloated ads and trackers as the inevitable price of doing business on the web.

I’m hearing an awful lot of false dichotomies here: either you can have a performant website or you have a business model based on advertising. Here’s another false dichotomy:

If the message coming down from above is that performance concerns and business concerns are fundamentally at odds, then I just don’t know how the developers are ever going to create a culture of performance (which is a real shame, because they sound like a great bunch). It’s a particularly bizarre false dichotomy to be foisting when you consider that all the evidence points to performance as being a key differentiator when it comes to making moolah.

It’s funny, but I take almost the opposite view that Nilay puts forth in his original article. Instead of thinking “Oh, why won’t these awful browsers improve to be better at delivering our websites?”, I tend to think “Oh, why won’t these awful websites improve to be better at taking advantage of our browsers?” After all, it doesn’t seem like that long ago that web browsers on mobile really were awful; incapable of rendering the “real” web, instead only able to deal with WAP.

As Maciej says in his magnificent presentation Web Design: The First 100 Years:

As soon as a system shows signs of performance, developers will add enough abstraction to make it borderline unusable. Software forever remains at the limits of what people will put up with. Developers and designers together create overweight systems in hopes that the hardware will catch up in time and cover their mistakes.

We complained for years that browsers couldn’t do layout and javascript consistently. As soon as that got fixed, we got busy writing libraries that reimplemented the browser within itself, only slower.

I fear that if Nilay got his wish and mobile browsers made a quantum leap in performance tomorrow, the result would be even more bloated JavaScript for even more ads and trackers on websites like The Verge.

If anything, browser makers might have to take more drastic steps to route around the damage of bloated websites with invasive tracking.

We’ve been here before. When JavaScript first landed in web browsers, it was quickly adopted for three primary use cases:

  1. swapping out images when the user moused over a link,
  2. doing really bad client-side form validation, and
  3. spawning pop-up windows.

The first use case was so popular, it was moved from a procedural language (JavaScript) to a declarative language (CSS). The second use case is still with us today. The third use case was solved by browsers. They added a preference to block unwanted pop-ups.
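
To illustrate that first use case, here's a rough before-and-after sketch of the shift from procedural to declarative; the image paths and class name here are made up purely for the example:

<!-- The procedural way: swap the image with JavaScript event handlers -->
<a href="/about/"
   onmouseover="document.images['nav-about'].src='/img/about-hover.gif'"
   onmouseout="document.images['nav-about'].src='/img/about.gif'">
    <img name="nav-about" src="/img/about.gif" alt="About">
</a>

<!-- The declarative way: describe both states in CSS and let :hover do the work -->
<style>
    .nav-about {
        display: inline-block;
        width: 100px;
        height: 30px;
        background-image: url('/img/about.gif');
    }
    .nav-about:hover {
        background-image: url('/img/about-hover.gif');
    }
</style>
<a href="/about/" class="nav-about">About</a>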

Tracking and advertising scripts are today’s equivalent of pop-up windows. There are already plenty of tools out there to route around their damage: Ghostery, Adblock Plus, etc., along with tools like Instapaper, Readability, and Pocket.

I’m sure that business owners felt the same way about pop-up ads back in the late ’90s. Just the price of doing business. Shrug shoulders. Just the way things are. Nothing we can do to change that.

For such a young, supposedly-innovative industry, I’m often amazed at what people choose to treat as immovable, unchangeable, carved-in-stone issues. Bloated, invasive ad tracking isn’t a law of nature. It’s a choice. We can choose to change.

Every bloated advertising and tracking script on a website was added by a person. What if that person refused? I guess that person would be fired and another person would be told to add the script. What if that person refused? What if we had a web developer picket line that we collectively refused to cross?

That’s an unrealistic, drastic suggestion. But the way that the web is being destroyed by our collective culpability calls for drastic measures.

By the way, the pop-up ad was first created by Ethan Zuckerman. He has since apologised. What will you be apologising for in decades to come?

Instantiation

When I give talks or workshops, I sometimes get a bit ranty. One of the richest seams of rantiness comes from me complaining about how we web designers and developers are responsible for making the web a hostile place. “Stop getting the web wrong!” I might shout, like an old man yelling at a cloud. I point to services like Instapaper and Readability and describe their existence as a damning indictment of our work.

Don’t get me wrong—I really like Instapaper, Readability, RSS readers, or any other tools that allow people to read what they want when they want it. But think about their fundamental selling point: get to the content you want without having to wade through the cruft. That cruft was put there by us.

So-called modern web design and development is damage that people have to route around.

(Ooh, I can feel myself coming over all ranty and angry again! Calm down, Jeremy, calm down!)

And. Breathe.

Now there’s a new tool to add to the list: Facebook Instant. Again, I think it’s actually pretty great that this service exists. But once again, it should make us ashamed of the work we’re collectively producing.

In this case, the service is—somewhat ironically—explicitly touting the performance benefits of not going to a website to read an article. Quite right.

PPK points to tools as the source of the problem and Marco Arment agrees:

The entire culture dominant among web developers today is bizarrely framework-heavy, with seemingly no thought given to minimizing dependencies and page weight.

But I think it’s a bit more subtle than that. As John Gruber says:

Business development deals have created problems that no web developer can solve. There’s no way to make a web page with a full-screen content-obscuring ad anything other than a shitty experience.

Now you might be saying to yourself “Well, I’ve never made a bloated web page!” or “I’ve never slapped loads of intrusive crap over the content!” I’d certainly like to think that I can look at my track record and hold my head up reasonably high. But that doesn’t matter. If the overall perception is that going to a URL to read an article is a pain in the ass, it hurts all of us.

Take this article from M.G. Siegler:

Not only is the web not fast enough for apps, it’s not fast enough for text either. …on mobile, the web browser just isn’t cutting it. … Native apps provide a better user experience on mobile than a web browser.

On the face of it, this is kind of a bizarre claim. After all, there’s nothing inherent in web browsers that makes them slow at rendering text—quite the opposite! And native apps still use HTTP (and often HTML) to fetch content; the network doesn’t suddenly get magically faster just because the piece of software requesting a resource doesn’t happen to be a web browser.

But this conflation of slow websites and slow web browsers is perfectly understandable. If it looks like a slow duck, and it quacks like a slow duck, then why not conclude that ducks are slow? Even if we know that there’s nothing inherently slow about making web pages.

My hope is that Facebook Instant will shake things up a bit. M.G. Siegler again:

At the very least, Facebook has put everyone else on notice. Your content better load fast or you’re screwed. Publication websites have become an absolutely bloated mess. They range from beautiful (The Verge) to atrocious (Bloomberg) to unusable (Forbes). The common denominator: they’re all way too slow.

There needs to be a cultural change in how we approach building for the web. Yes, some of the tools we choose are part of the problem, but the bigger problem is that performance still isn’t being recognised as the most important factor in how people feel about websites (and by extension, the web). This isn’t just a developer issue. It’s a design issue. It’s a UX issue. It’s a business issue. Performance is everybody’s collective responsibility.

I’d better stop now before I start getting all ranty again.

I’ll leave you with some other writings on this topic…

Tim Kadlec talks about choosing performance:

It’s not because of any sort of technical limitations. No, if a website is slow it’s because performance was not prioritized. It’s because when push came to shove, time and resources were spent on other features of a site and not on making sure that site loads quickly.

Jim Ray points out that “we learned the wrong lesson from the rise of mobile and the app ecosystem”:

We’ve spent far too long trying to compete with native experiences by making our websites look and behave like apps. This includes not just thousands of lines of JavaScript to mimic native app swipes and scrolling but even the lower overhead aesthetics of fixed position headers and persistent navigation.

(*cough*Flipboard*cough*)

Finally, Baldur Bjarnason has written a terrific piece:

The web doesn’t suck. Your websites suck.

All of your websites suck.

You destroy basic usability by hijacking the scrollbar. You take native functionality (scrolling, selection, links, loading) that is fast and efficient and you rewrite it with ‘cutting edge’ javascript toolkits and frameworks so that it is slow and buggy and broken. You balloon your websites with megabytes of cruft. You ignore best practices. You take something that works and is complementary to your business and turn it into a liability.

The lousy performance of your websites becomes a defensive moat around Facebook.

Go read the whole thing—it’s terrific:

This is a long-standing debate. Except it’s only long-standing among web developers. Columnists, managers, pundits, and journalists seem to have no interest in understanding the technical foundation of their livelihoods. Instead they are content with assuming that Facebook can somehow magically render HTML over HTTP faster than anybody else and there is nothing anybody can do to make their crap scroll-jacking websites faster. They buy into the myth that the web is incapable of delivering on its core capabilities: delivering hypertext and images quickly to a diverse and connected readership.

Inlining critical CSS for first-time visits

After listening to Scott rave on about how much of a perceived-performance benefit he got from inlining critical CSS on first load, I thought I’d give it a shot over at The Session. On the chance that this might be useful for others, I figured I’d document what I did.

The idea here is that you can give a massive boost to the perceived performance of the first page load on a site by putting the most important CSS in the head of the page. Then you cache the full stylesheet. For subsequent visits you only ever use the external stylesheet. So if you’re squeamish at the thought of munging your CSS into your HTML (and that’s a perfectly reasonable reaction), don’t worry—this is a temporary workaround just for initial visits.

My particular technology stack here is using Grunt, Apache, and PHP with Twig templates. But I’m sure you can adapt this for other technology stacks: what’s important here isn’t the technology, it’s the thinking behind it. And anyway, the end user never sees any of those technologies: the end user gets HTML, CSS, and JavaScript. As long as that’s what you’re outputting, the specifics of the technology stack really don’t matter.

Generating the critical CSS

Okay. First question: how do you figure out which CSS is critical and which CSS can be deferred?

To help answer that, and automate the task of generating the critical CSS, Filament Group have made a Grunt task called grunt-criticalcss. I added that to my project and updated my Gruntfile accordingly:

grunt.initConfig({
    // All my existing Grunt configuration goes here.
    criticalcss: {
        dist: {
            options: {
                url: 'http://thesession.dev',
                width: 1024,
                height: 800,
                filename: '/path/to/main.css',
                outputfile: '/path/to/critical.css'
            }
        }
    }
});

I’m giving it the name of my locally-hosted version of the site and some parameters to judge which CSS to prioritise. Those parameters are viewport width and height. Now, that’s not a perfect way of judging which CSS matters most, but it’ll do.

Then I add it to the list of Grunt tasks:

// All my existing Grunt tasks go here.
grunt.loadNpmTasks('grunt-criticalcss');

grunt.registerTask('default', ['sass', /* ...all my other tasks here... */ 'criticalcss']);

The end result is that I’ve got two CSS files: the full stylesheet (called something like main.css) and a stylesheet that only contains the critical styles (called critical.css).

Cache-busting CSS

Okay, this is a bit of a tangent but trust me, it’s going to be relevant…

Most of the time it’s a very good thing that browsers cache external CSS files. But if you’ve made a change to that CSS file, then that feature becomes a bug: you need some way of telling the browser that the CSS file has been updated. The simplest way to do this is to change the name of the file so that the browser sees it as a whole new asset to be cached.

You could use query strings to do this cache-busting but that has some issues. I use a little bit of Apache rewriting to get a similar effect. I point browsers to CSS files like this:

<link rel="stylesheet" href="/css/main.20150310.css">

Now, there isn’t actually a file named main.20150310.css, it’s just called main.css. To tell the server where the actual file is, I use this rewrite rule:

RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.+)\.(\d+)\.(js|css)$ $1.$3 [L]

That tells the server to ignore those numbers in JavaScript and CSS file names, but the browser will still interpret it as a new file whenever I update that number. You can do that in a .htaccess file or directly in the Apache configuration.
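
If you go the .htaccess route, the complete snippet might look something like this (assuming mod_rewrite is enabled on your server; the directives are the same ones shown above, just with the rewrite engine switched on first):

# Switch on the rewrite engine for this directory
RewriteEngine On
# If the requested file doesn't exist on disk...
RewriteCond %{REQUEST_FILENAME} !-f
# ...strip the date stamp out of JavaScript and CSS file names
RewriteRule ^(.+)\.(\d+)\.(js|css)$ $1.$3 [L]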

Right. With that little detour out of the way, let’s get back to the issue of inlining critical CSS.

Differentiating repeat visits

That number that I’m putting into the filenames of my CSS is something I update in my Twig template, like this (although this is really something that a Grunt task could do, I guess):

{% set cssupdate = '20150310' %}

Then I can use it like this:

<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">

I can also use JavaScript to store that number in a cookie called csscached so I’ll know if the user has a cached version of this revision of the stylesheet:

<script>
document.cookie = 'csscached={{ cssupdate }};expires="Tue, 19 Jan 2038 03:14:07 GMT";path=/';
</script>

The absence or presence of that cookie is going to be what determines whether the user gets inlined critical CSS (a first-time visitor, or a visitor with an out-of-date cached stylesheet) or whether the user gets a good ol’ fashioned external stylesheet (a repeat visitor with an up-to-date version of the stylesheet in their cache).

Here are the steps I’m going through:

First of all, set the Twig cssupdate variable to the last revision of the CSS:

{% set cssupdate = '20150310' %}

Next, check to see if there’s a cookie called csscached that matches the value of the latest revision. If there is, great! This is a repeat visitor with an up-to-date cache. Give ‘em the external stylesheet:

{% if _cookie.csscached == cssupdate %}
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">

If not, then dump the critical CSS straight into the head of the document:

{% else %}
<style>
{% include '/css/critical.css' %}
</style>

Now I still want to load the full stylesheet but I don’t want it to be a blocking request. I can do this using JavaScript. Once again it’s Filament Group to the rescue with their loadCSS script:

 <script>
    // include loadCSS here...
    loadCSS('/css/main.{{ cssupdate }}.css');

While I’m at it, I store the value of cssupdate in the csscached cookie:

    document.cookie = 'csscached={{ cssupdate }};expires="Tue, 19 Jan 2038 03:14:07 GMT";path=/';
</script>

Finally, consider the possibility that JavaScript isn’t available and link to the full CSS file inside a noscript element:

<noscript>
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">
</noscript>
{% endif %}

And we’re done. Phew!

Here’s how it looks all together in my Twig template:

{% set cssupdate = '20150310' %}
{% if _cookie.csscached == cssupdate %}
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">
{% else %}
<style>
{% include '/css/critical.css' %}
</style>
<script>
// include loadCSS here...
loadCSS('/css/main.{{ cssupdate }}.css');
document.cookie = 'csscached={{ cssupdate }};expires="Tue, 19 Jan 2038 03:14:07 GMT";path=/';
</script>
<noscript>
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">
</noscript>
{% endif %}

You can see the production code from The Session in this gist. I’ve tweaked the loadCSS script slightly to match my preferred JavaScript style but otherwise, it’s doing exactly what I’ve outlined here.
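
If you’re curious about what loadCSS is actually doing, the core trick is to inject a link element from JavaScript so that the stylesheet downloads without blocking rendering. This isn’t Filament Group’s code, just a minimal sketch of the idea (their script does more work than this, for instance to cope with browsers that don’t reliably fire onload events for stylesheets):

// A minimal sketch of asynchronous stylesheet loading; not the real loadCSS
function loadCSS(href) {
    var link = document.createElement('link');
    link.rel = 'stylesheet';
    link.href = href;
    // A media query that never matches, so the download doesn't block rendering
    link.media = 'only x';
    // Once the file has loaded, switch the media type so the styles apply
    link.onload = function () {
        link.media = 'all';
    };
    document.getElementsByTagName('head')[0].appendChild(link);
    return link;
}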

The result

According to Google’s PageSpeed Insights, I done good.

Optimising https://thesession.org/