Tags: performance

Less JavaScript

Every front-end developer at Clearleft went to FFConf last Friday: me, Mark, Graham, Charlotte, and Danielle. We weren’t about to pass up the opportunity to attend a world-class dev conference right here in our home base of Brighton.

The day was unsurprisingly excellent. All the speakers brought their A-game on a wide range of topics. Of course JavaScript was covered, but there was also plenty of mindfood on CSS, accessibility, progressive enhancement, dev tools, creative coding, and even emoji.

Normally FFConf would be a good opportunity to catch up with some Pauls from the Google devrel team, but because of an unfortunate scheduling clash this year, all the Pauls were at Chrome Dev Summit 2016 on the other side of the Atlantic.

I’ve been catching up on the videos from the event. There’s plenty of tech-related stuff: dev tools, web components, and plenty of talk about progressive web apps. But there was also a very, very heavy focus on performance. I don’t just mean performance at the shallow scale of file size and optimisation, but a genuine questioning of the impact of our developer workflows and tools.

In his talk on service workers (what else?), Jake makes the point that not everything needs to be a single page app, echoing Ada’s talk at FFConf.

He makes the point that if you really want fast rendering, nothing on the client side quite beats a server render.

They’ve written a lot of JavaScript to make this quite slow.

Unfortunately, all too often, I hear people say that a progressive web app must be a single page app. And I am not so sure. You might not need a single page app. A single page app can end up being a lot of work and slower. There’s a lot of cargo-culting around single page apps.

Alex followed up his barnstorming talk from the Polymer Summit with some more uncomfortable truths about how mobile phones work.

Cell networks are basically kryptonite to the protocols and assumptions that the web was built on.

And JavaScript frameworks aren’t helping. Quite the opposite.

But make no mistake: if you’re using one of today’s more popular JavaScript frameworks in the most naive way, you are failing by default. There is no sugarcoating this.

Today’s frameworks are mostly a sign of ignorance, or privilege, or both. The good news is that we can fix the ignorance.

Assumptions

Last year Benedict Evans wrote about the worldwide proliferation and growth of smartphones. Nolan referenced that post when he extrapolated the kind of experience people will be having:

As Benedict Evans has noted, the next billion people who are poised to come online will be using the internet almost exclusively through smartphones. And if Google’s plans with Android One are any indication, then we have a fairly good idea of what kind of devices the “next billion” will be using:

  • They’ll mostly be running Android.
  • They’ll have decent specs (1GB RAM, quad-core processors).
  • They’ll have an evergreen browser and WebView (Android 5+).
  • What they won’t have, however, is a reliable internet connection.

This is the same argument that Tom made in his presentation at Responsive Field Day. The main point is that network conditions are unreliable, and I absolutely agree that we need to be very, very mindful of that. But I’m not so sure about the other conditions either. They smell like assumptions:

Assumptions are the problem. Whether it’s assumptions about screen size, assumptions about being able-bodied, assumptions about network connectivity, or assumptions about browser capabilities, I don’t think any assumptions are a safe bet. Now you might quite reasonably say that we have to make some assumptions when we’re building on the web, and you’d be right. But I think we should still aim to keep them to a minimum.

It’s not necessarily true that all those new web users will be running a WebView browser like Chrome—there are millions of Opera Mini users, and I would expect that number to rise, given all the speed and cost benefits that proxy browsing brings.

I also don’t think that just because a device is a smartphone it necessarily means that it’s a pocket supercomputer. It might seem like a reasonable assumption to make, given the specs of even a low-end smartphone, but the specs don’t tell the whole story.

Alex gave a great presentation at the recent Polymer Summit. He dives deep into exactly how smartphones at the lower end of the market deal with websites.

I don’t normally enjoy listening to talk of hardware and specs, but Alex makes the topic very compelling by tying it directly to how we build websites. In short, we’re using waaaaay too much JavaScript. The message here is not “don’t use JavaScript” but rather “use JavaScript wisely.” Alas, many of the current crop of monolithic frameworks aren’t well suited to this.

Alex’s talk prompted Michael Scharnagl to take a look back at past assumptions and lessons learned on the web, from responsive design to progressive web apps.

We are consistently improving and we often have to realize that our assumptions are wrong.

This is particularly true when we’re making assumptions about how people will access the web.

It’s not enough to talk about the “next billion” in abstract, like an opportunity to reach teeming masses of people ripe for monetization. We need to understand their lives and their priorities with the sort of detail that can build empathy for other people living under vastly different circumstances.

That’s from an article Ethan linked to.

Enhance! Conf!

Two weeks from now there will be an event in London. You should go to it. It’s called EnhanceConf:

EnhanceConf is a one day, single track conference covering the state of the art in progressive enhancement. We will look at the tools and techniques that allow you to extend the reach of your website/application without incurring additional costs.

As you can probably guess, this is right up my alley. Wild horses wouldn’t keep me away from it. I’ve been asked to be Master of Ceremonies for the day, which is a great honour. Luckily I have some experience in that department from three years of hosting Responsive Day Out. In fact, EnhanceConf is going to run very much in the mold of Responsive Day Out, as organiser Simon explained in an interview with Aaron.

But the reason to attend is of course the content. Check out that line-up! Now that is going to be a knowledge-packed day: design, development, accessibility, performance …these are a few of my favourite things. Nat Buckley, Jen Simmons, Phil Hawksworth, Anna Debenham, Aaron Gustafson …these are a few of my favourite people.

Tickets are still available. Use the discount code JEREMYK to get a whopping 15% off the ticket price.

There’s also a scholarship:

The scholarships are available to anyone not normally able to attend a conference.

I’m really looking forward to EnhanceConf. See you at RSA House on March 4th!

AMPed up

Apple has Apple News. Facebook has Instant Articles. Now Google has AMP: Accelerated Mobile Pages.

The big players sure are going to a lot of effort to reinvent RSS.

That may sound like a flippant remark, but it’s not too far from the truth. In the case of Apple News, its current incarnation appears to be quite literally an RSS reader, at least until the unveiling of the forthcoming Apple News Format.

Google’s AMP project looks a little bit different to the offerings from Facebook and Apple. Rather than creating a proprietary format from scratch, it mandates a subset of HTML …with some proprietary elements thrown in (or, to use the more diplomatic parlance of the extensible web, custom elements).

The idea is that alongside the regular HTML version of your document, you provide a corresponding AMP HTML version. Because the AMP HTML version will be leaner and meaner, user agents can then grab the AMP HTML version and present that to the end user for a faster browsing experience.

So if an RSS feed is an alternate representation of a homepage or a listing of articles, then an AMP document is an alternate representation of a single article.

Now, my own personal take on providing alternate representations of documents is “Sure. Why not?” Here on adactio.com I provide RSS feeds. On The Session I provide RSS, JSON, and XML. And on Huffduffer I provide RSS, Atom, JSON, and XSPF, adding:

If you would like to see another format supported, share your idea.

Also, each individual item on Huffduffer has a corresponding oEmbed version (and, in theory, an RDF version)—an alternate representation of that item …in principle, not that different from AMP. The big difference with AMP is that it’s using HTML (of sorts) for its format.

All of this sounds pretty reasonable: provide an alternate representation of your canonical HTML pages so that user-agents (Twitter, Google, browsers) can render a faster-loading version …much like an RSS reader.

So should you start providing AMP versions of your pages? My initial reaction is “Sure. Why not?”

The AMP Project website comes with a list of frequently asked questions, which, of course, nobody has asked. My own list of invented frequently asked questions might look a little different.

Will this kill advertising?

We live in hope.

Alas, AMP pages will still be able to carry advertising, but in a restricted form. No more scripts that track your movement across the web …unless the script is from an authorised provider, like, say, Google.

But it looks like the worst performance offenders won’t be able to get their grubby little scripts into AMP pages. This is a good thing.

Won’t this kill journalism?

Of all the horrid myths currently in circulation, the two that piss me off the most are:

  1. Journalism requires advertising to survive.
  2. Advertising requires invasive JavaScript.

Put the two together and you get the gist of most of the chicken-littling articles currently in circulation: “Journalism requires invasive JavaScript to survive.”

I could argue against the first claim, but let’s leave that for another day. Let’s suppose for now that, sure, journalism requires advertising to survive. Fine.

It’s that second point that is fundamentally wrong. The idea that the current state of advertising is the only way of advertising is incredibly short-sighted and misguided. Invasive JavaScript is not a requirement for showing me an ad. Setting a cookie is not a requirement for showing me an ad. Knowing where I live, who my friends are, what my income level is, and where I’ve been on the web …none of these are requirements for showing me an ad.

It is entirely possible to advertise to me and treat me with respect at the same time. The Deck already does this.

And you know what? Ad networks had their chance. They had their chance to treat us with respect with the Do Not Track initiative. We asked them to respect our wishes. They told us to get screwed.

Now those same ad providers are crying because we’re installing ad blockers. They can get screwed.

Anyway.

It is entirely possible to advertise within AMP pages …just not using blocking JavaScript.

For a nicely nuanced take on what AMP could mean for journalism, see Joshua Benton’s article on Nieman Lab—Get AMP’d: Here’s what publishers need to know about Google’s new plan to speed up your website.

Why not just make faster web pages?

Excellent question!

For a site like adactio.com, the difference between the regular HTML version of an article and the corresponding AMP version of the same article is pretty small. It’s a shame that I can’t just say “Hey, the current version of the article is the AMP version”, but that would require that I only use a subset of HTML and that I add some required guff to my page (including an unnecessary JavaScript file).

But for most of the news sites out there, the difference between their regular HTML pages and the corresponding AMP versions will be pretty significant. That’s because the regular HTML versions are bloated with third-party scripts, oversized assets, and cruft around the actual content.

Now it is in theory possible for these news sites to get rid of all those things, and I sincerely hope that they will. But that’s a big political struggle. I am rooting for developers—like the good folks at VOX—who have to battle against bosses who honestly think that journalism requires invasive JavaScript. Best of luck.

Along comes Google saying “If you want to play in our sandbox, you’re going to have to abide by our rules.” Those rules include performance best practices (for the most part—I take issue with some of the requirements, and I’ll go into that in more detail in a moment).

Now when the boss says “Slap a three megabyte JavaScript library on it so we can show a carousel”, the developers can only respond with “Google says No.”

When the boss says “Slap a ton of third-party trackers on it so we can monetise those eyeballs”, the developers can only respond with “Google says No.”

Google have used their influence like this before and it has brought them accusations of monopolistic abuse. Some people got very upset when they began labelling (and later ranking) mobile-friendly pages. Personally, I’ve got no issue with that.

In this particular case, Google aren’t mandating what you can and can’t do on your regular HTML pages; only what you can and can’t do on the corresponding AMP page.

Which brings up another question…

Will the AMP web kill the open web?

If we all start creating AMP versions of our pages, and those pages are faster than our regular HTML versions, won’t everyone just see the AMP versions without ever seeing the “full” versions?

Tim articulates a legitimate concern:

This promise of improved distribution for pages using AMP HTML shifts the incentive. AMP isn’t encouraging better performance on the web; AMP is encouraging the use of their specific tool to build a version of a web page. It doesn’t feel like something helping the open web so much as it feels like something bringing a little bit of the walled garden mentality of native development onto the web.

That troubles me. Using a very specific tool to build a tailored version of my page in order to “reach everyone” doesn’t fit any definition of the “open web” that I’ve ever heard.

Fair point. But I also remember that a lot of people were upset by RSS. They didn’t like that users could go for months at a time without visiting the actual website, and yet they were reading every article. They were reading every article in non-browser user agents in a format that wasn’t HTML. On paper that sounds like the antithesis of the open web, but in practice there was always something very webby about RSS, and RSS feed readers—it put the power back in the hands of the end users.

Some people chose not to play ball. They only put snippets in their RSS feeds, not the full articles. Maybe some publishers will do the same with the AMP versions of their articles: “To read more, click here…”

But I remember what generally tended to happen to the publishers who refused to put the full content in their RSS feeds. We unsubscribed.

Still, I share the concern that any one company—whether it’s Facebook, Apple, or Google—should wield so much power over how we publish on the web. I don’t think you have to be a conspiracy theorist to view the AMP project as an attempt to replace the existing web with an alternate web, more tightly controlled by Google (albeit a faster, more performant, tightly-controlled web).

My hope is that the current will flow in both directions. As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask “Why can’t our regular pages be this fast?” By showing that there is life beyond big bloated invasive web pages, perhaps the AMP project will work as a demo of what the whole web could be.

I’ve been playing around with the AMP HTML spec. It has some issues. The good news is that it’s open source and the project owners seem receptive to feedback.

JavaScript

No external JavaScript is allowed in an AMP HTML document. This covers third-party libraries, advertising and tracking scripts. This is A-okay with me.

The reasons given for this ban are related to performance and I agree with them completely. Big bloated JavaScript libraries are one of the biggest performance killers on the web. I’m happy to leave them at the door (although weirdly, web fonts—another big performance killer—are allowed in).

But then there’s a bit of an about-face. In order to have a valid AMP HTML page, you must include a piece of third-party JavaScript. In this case, the third party is Google and the JavaScript file is what handles the loading of assets.

This seems a bit strange to me; on the one hand claiming that third-party JavaScript is bad for performance and on the other, requiring some third-party JavaScript. As Justin says:

For me this is loading one thing too many… the AMP JS library. Surely the document itself is going to be faster than loading a library to try and make it load faster.

On the plus side, this third-party JavaScript is loaded asynchronously. It seems to mostly be there to handle the rendering of embedded content: images, videos, audio, etc.

Embedded content

If you want audio, video, or images on your page, you must use propriet… custom elements like amp-audio, amp-video, and amp-img. In the case of images, I can see how this is a way of getting around the browser’s lookahead pre-parser (although responsive images also solve this problem). In the case of audio and video, the standard audio and video elements already come with a way of specifying preloading behaviour using the preload attribute. Very odd.

Justin again:

I’m not sure if this is solving anything at the moment that we’re not already fixing with something like responsive images.

To use amp-img for images within the flow of a document, you’ll need to specify the dimensions of the image. This makes sense from a rendering point of view—knowing the width and height ahead of time avoids repaints and reflows. Alas, in many of the cases here on adactio.com, I don’t know the dimensions of the images I’m including. So any of my AMP HTML pages that include images will be invalid.

Overall, the way that AMP HTML handles embedded content looks like a whole lot of wheel reinvention. I like the idea of providing custom elements as an option for authors. I hate the idea of making them a requirement.

Metadata

If you want to provide metadata about your document, AMP HTML currently requires the use of Google’s Schema.org vocabulary. This has a big whiff of vendor lock-in to it. I’ve flagged this up as an issue and Aaron is pushing a change so hopefully this will be resolved soon.

Accessibility

In its initial release, the AMP HTML spec came with some nasty surprises for accessibility. The biggest is probably the requirement to include this in your viewport meta element:

maximum-scale=1,user-scalable=no

Yowzers! That’s some slap in the face to decent web developers everywhere. Fortunately this has been flagged up and I’m hoping it will be fixed soon.

If it doesn’t get fixed, it’s quite a non-starter. It beggars belief that Google would mandate to authors that they must make their pages inaccessible to pinch/zoom. I would hope that many developers would rebel against such a draconian injunction. If that happens, it’ll be interesting to see what becomes of those theoretically badly-formed AMP HTML documents. Technically, they will fail validation, but for very good reason. Will those accessible documents be rejected?

Please get involved on this issue if this is important to you (hint: this should be important to you).

There are a few smaller issues. Initially the :focus pseudo-class was disallowed in author CSS, but that’s being fixed.

Currently AMP HTML documents must have this line:

<style>body {opacity: 0}</style><noscript><style>body {opacity: 1}</style></noscript>

shudders

That’s a horrible conflation of JavaScript availability and CSS. It’s being fixed though, and soon all the opacity jiggery-pokery will only happen via JavaScript, which will be a big improvement: it should either all happen in CSS or all happen in JavaScript, but not the current mixture of the two.

Discovery

The AMP HTML version of your page is not the canonical version. You can specify where the real HTML version of your document is by using rel="canonical". Great!

But how do you link from your canonical page out to the AMP HTML version? Currently you’re supposed to use rel="amphtml". No, they haven’t checked the registry. Again. I’ll go in and add it.

In the meantime, I’m also requesting that the amphtml value can be combined with the alternate value, seeing as rel values can be space separated:

rel="alternate amphtml" type="text/html"

See? Not that different to RSS:

rel="alterate" type="application/rss+xml"

POSSE

When I publish something on adactio.com in HTML, it already gets syndicated to different places. This is the Indie Web idea of POSSE: Publish (on your) Own Site, Syndicate Elsewhere. As well as providing RSS feeds, I’ve also got Twitter bots that syndicate to Twitter. An If This, Then That script pushes posts to Facebook. And if I publish a photo, it goes to Flickr. Now that Medium is finally providing a publishing API, I’ll probably start syndicating articles there as well. The more, the merrier.

From that perspective, providing AMP HTML pages feels like just one more syndication option. If it were the only option, and I felt compelled to provide AMP versions of my content, I’d be very concerned. But for now, I’ll give it a whirl and see how it goes.

Here’s a bit of PHP I’m using to convert a regular piece of HTML into AMP HTML—it’s horrible code; it uses regular expressions on HTML which, as we all know, will summon the Elder Gods.
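
To give a flavour of the approach, here’s a rough sketch in JavaScript rather than PHP (not the actual code from the gist; the ampify name is invented, and it blithely assumes that every img element already has its width and height attributes):

function ampify(html) {
    // Swap img elements for amp-img custom elements (which need a closing tag).
    html = html.replace(/<img([^>]*?)\s*\/?>/gi, '<amp-img$1></amp-img>');
    // Strip script elements: arbitrary external JavaScript isn't allowed in AMP HTML.
    html = html.replace(/<script[\s\S]*?<\/script>/gi, '');
    return html;
}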

Building the dConstruct 2015 site

I remember when I first saw Paddy’s illustration for this year’s dConstruct site, I thought “Well, that’s a design direction, but there’s no way that Graham will be able to implement all of it.” There was a tight deadline for getting the site out, and let’s face it, there was so much going on in the design that we’d just have to prioritise.

I underestimated Graham’s sheer bloody-mindedness.

At the next front-end pow-wow at Clearleft, Graham showed the dConstruct site in all its glory …in Lynx.

http://2015.dconstruct.org in Lynx.

I love that. Even with the focus on the gorgeous illustration and futuristic atmosphere of the design, Graham took the time to think about the absolute basics: marking up the content in a logical structured way. Everything after that—the imagery, the fonts, the skewed style—all of it was built on a solid foundation.

One site, two browsers.

It would’ve been easy to go crazy with the fonts and images, but Graham made sure to optimise everything to within an inch of its life. The biggest bottleneck comes from a third party provider—the map tiles and associated JavaScript …so that’s loaded in after the initial content is loaded. It turns out that the site build was a matter of prioritisation after all.
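
That kind of deferred loading doesn’t require anything fancy. This isn’t Graham’s actual code, but the general pattern is simply to wait for the load event before injecting the third-party script (the URL here is only a placeholder):

window.addEventListener('load', function () {
    // Only once the page's own content has rendered do we fetch the map assets.
    var script = document.createElement('script');
    script.src = '/path/to/map-provider.js'; // placeholder, not the real provider
    document.body.appendChild(script);
});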

http://2015.dconstruct.org/

There’s plenty of CSS trickery going on: transforms, transitions, and opacity. But for the icing on the cake, Graham reached for canvas and programmed space elevator traffic with randomly seeded velocity and size.
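
I won’t pretend this is Graham’s code, but the gist of that kind of canvas animation is straightforward: seed some particles with random sizes and speeds, then nudge them along on every animation frame (the numbers here are plucked out of thin air):

var canvas = document.querySelector('canvas');
var context = canvas.getContext('2d');
var pods = [];
for (var i = 0; i < 20; i += 1) {
    pods.push({
        x: Math.random() * canvas.width,
        y: Math.random() * canvas.height,
        size: 1 + Math.random() * 3,    // randomly seeded size
        speed: 0.5 + Math.random() * 2  // randomly seeded velocity
    });
}
(function tick() {
    context.clearRect(0, 0, canvas.width, canvas.height);
    pods.forEach(function (pod) {
        pod.y -= pod.speed;             // drift upwards
        if (pod.y < 0) {
            pod.y = canvas.height;      // wrap around at the top
        }
        context.fillRect(pod.x, pod.y, pod.size, pod.size);
    });
    requestAnimationFrame(tick);
})();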

Oh, and of course it’s all responsive.

So, putting that all together…

The dConstruct 2015 site is gorgeous, semantic, responsive, and performant. Conventional wisdom dictates that you have to choose, but this little site—built on a really tight schedule—shows otherwise.

On The Verge

Quite a few people have been linking to an article on The Verge with the inflammatory title The Mobile web sucks. In it, Nilay Patel heaps blame upon mobile browsers, Safari in particular:

But man, the web browsers on phones are terrible. They are an abomination of bad user experience, poor performance, and overall disdain for the open web that kicked off the modern tech revolution.

Les Orchard says what we’re all thinking in his detailed response The Verge’s web sucks:

Calling out browser makers for the performance of sites like his? That’s a bit much.

Nilay does acknowledge that the Verge could do better:

Now, I happen to work at a media company, and I happen to run a website that can be bloated and slow. Some of this is our fault: The Verge is ultra-complicated, we have huge images, and we serve ads from our own direct sales and a variety of programmatic networks.

But still, it sounds like the buck is being passed along. The performance issues are being treated as Somebody Else’s Problem …ad networks, trackers, etc.

The developers at Vox Media take a different, and in my opinion, more correct view. They’re declaring performance bankruptcy:

I mean, let’s cut to the chase here… our sites are friggin’ slow, okay!

But I worry about how they can possibly reconcile their desire for a faster website with a culture that accepts enormously bloated ads and trackers as the inevitable price of doing business on the web.

I’m hearing an awful lot of false dichotomies here: either you can have a performant website or you have a business model based on advertising.

If the message coming down from above is that performance concerns and business concerns are fundamentally at odds, then I just don’t know how the developers are ever going to create a culture of performance (which is a real shame, because they sound like a great bunch). It’s a particularly bizarre false dichotomy to be foisting when you consider that all the evidence points to performance as being a key differentiator when it comes to making moolah.

It’s funny, but I take almost the opposite view that Nilay puts forth in his original article. Instead of thinking “Oh, why won’t these awful browsers improve to be better at delivering our websites?”, I tend to think “Oh, why won’t these awful websites improve to be better at taking advantage of our browsers?” After all, it doesn’t seem like that long ago that web browsers on mobile really were awful; incapable of rendering the “real” web, instead only able to deal with WAP.

As Maciej says in his magnificent presentation Web Design: The First 100 Years:

As soon as a system shows signs of performance, developers will add enough abstraction to make it borderline unusable. Software forever remains at the limits of what people will put up with. Developers and designers together create overweight systems in hopes that the hardware will catch up in time and cover their mistakes.

We complained for years that browsers couldn’t do layout and javascript consistently. As soon as that got fixed, we got busy writing libraries that reimplemented the browser within itself, only slower.

I fear that if Nilay got his wish and mobile browsers made a quantum leap in performance tomorrow, the result would be even more bloated JavaScript for even more ads and trackers on websites like The Verge.

If anything, browser makers might have to take more drastic steps to route around the damage of bloated websites with invasive tracking.

We’ve been here before. When JavaScript first landed in web browsers, it was quickly adopted for three primary use cases:

  1. swapping out images when the user moused over a link,
  2. doing really bad client-side form validation, and
  3. spawning pop-up windows.

The first use case was so popular, it was moved from a procedural language (JavaScript) to a declarative language (CSS). The second use case is still with us today. The third use case was solved by browsers. They added a preference to block unwanted pop-ups.

Tracking and advertising scripts are today’s equivalent of pop-up windows. There are already plenty of tools out there to route around their damage: Ghostery, Adblock Plus, etc., along with tools like Instapaper, Readability, and Pocket.

I’m sure that business owners felt the same way about pop-up ads back in the late ’90s. Just the price of doing business. Shrug shoulders. Just the way things are. Nothing we can do to change that.

For such a young, supposedly-innovative industry, I’m often amazed at what people choose to treat as immovable, unchangeable, carved-in-stone issues. Bloated, invasive ad tracking isn’t a law of nature. It’s a choice. We can choose to change.

Every bloated advertising and tracking script on a website was added by a person. What if that person refused? I guess that person would be fired and another person would be told to add the script. What if that person refused? What if we had a web developer picket line that we collectively refused to cross?

That’s an unrealistic, drastic suggestion. But the way that the web is being destroyed by our collective culpability calls for drastic measures.

By the way, the pop-up ad was first created by Ethan Zuckerman. He has since apologised. What will you be apologising for in decades to come?

Instantiation

When I give talks or workshops, I sometimes get a bit ranty. One of the richest seams of rantiness comes from me complaining about how we web designers and developers are responsible for making the web a hostile place. “Stop getting the web wrong!” I might shout, like an old man yelling at a cloud. I point to services like Instapaper and Readability and describe their existence as a damning indictment of our work.

Don’t get me wrong—I really like Instapaper, Readability, RSS readers, or any other tools that allow people to read what they want when they want it. But think about their fundamental selling point: get to the content you want without having to wade through the cruft. That cruft was put there by us.

So-called modern web design and development is damage that people have to route around.

(Ooh, I can feel myself coming over all ranty and angry again! Calm down, Jeremy, calm down!)

And. Breathe.

Now there’s a new tool to add to the list: Facebook Instant. Again, I think it’s actually pretty great that this service exists. But once again, it should make us ashamed of the work we’re collectively producing.

In this case, the service is—somewhat ironically—explicitly touting the performance benefits of not going to a website to read an article. Quite right.

PPK points to tools as the source of the problem and Marco Arment agrees:

The entire culture dominant among web developers today is bizarrely framework-heavy, with seemingly no thought given to minimizing dependencies and page weight.

But I think it’s a bit more subtle than that. As John Gruber says:

Business development deals have created problems that no web developer can solve. There’s no way to make a web page with a full-screen content-obscuring ad anything other than a shitty experience.

Now you might be saying to yourself “Well, I’ve never made a bloated web page!” or “I’ve never slapped loads of intrusive crap over the content!” I’d certainly like to think that I can look at my track record and hold my head up reasonably high. But that doesn’t matter. If the overall perception is that going to a URL to read an article is a pain in the ass, it hurts all of us.

Take this article from M.G. Siegler:

Not only is the web not fast enough for apps, it’s not fast enough for text either. …on mobile, the web browser just isn’t cutting it. … Native apps provide a better user experience on mobile than a web browser.

On the face of it, this is kind of a bizarre claim. After all, there’s nothing inherent in web browsers that makes them slow at rendering text—quite the opposite! And native apps still use HTTP (and often HTML) to fetch content; the network doesn’t suddenly get magically faster just because the piece of software requesting a resource doesn’t happen to be a web browser.

But this conflation of slow websites and slow web browsers is perfectly understandable. If it looks like a slow duck, and it quacks like a slow duck, then why not conclude that ducks are slow? Even if we know that there’s nothing inherently slow about making web pages.

My hope is that Facebook Instant will shake things up a bit. M.G. Siegler again:

At the very least, Facebook has put everyone else on notice. Your content better load fast or you’re screwed. Publication websites have become an absolutely bloated mess. They range from beautiful (The Verge) to atrocious (Bloomberg) to unusable (Forbes). The common denominator: they’re all way too slow.

There needs to be a cultural change in how we approach building for the web. Yes, some of the tools we choose are part of the problem, but the bigger problem is that performance still isn’t being recognised as the most important factor in how people feel about websites (and by extension, the web). This isn’t just a developer issue. It’s a design issue. It’s a UX issue. It’s a business issue. Performance is everybody’s collective responsibility.

I’d better stop now before I start getting all ranty again.

I’ll leave you with some other writings on this topic…

Tim Kadlec talks about choosing performance:

It’s not because of any sort of technical limitations. No, if a website is slow it’s because performance was not prioritized. It’s because when push came to shove, time and resources were spent on other features of a site and not on making sure that site loads quickly.

Jim Ray points out that “we learned the wrong lesson from the rise of mobile and the app ecosystem”:

We’ve spent far too long trying to compete with native experiences by making our websites look and behave like apps. This includes not just thousands of lines of JavaScript to mimic native app swipes and scrolling but even the lower overhead aesthetics of fixed position headers and persistent navigation.

(*cough*Flipboard*cough*)

Finally, Baldur Bjarnason has written a terrific piece:

The web doesn’t suck. Your websites suck.

All of your websites suck.

You destroy basic usability by hijacking the scrollbar. You take native functionality (scrolling, selection, links, loading) that is fast and efficient and you rewrite it with ‘cutting edge’ javascript toolkits and frameworks so that it is slow and buggy and broken. You balloon your websites with megabytes of cruft. You ignore best practices. You take something that works and is complementary to your business and turn it into a liability.

The lousy performance of your websites becomes a defensive moat around Facebook.

Go read the whole thing—it’s terrific:

This is a long-standing debate. Except it’s only long-standing among web developers. Columnists, managers, pundits, and journalists seem to have no interest in understanding the technical foundation of their livelihoods. Instead they are content with assuming that Facebook can somehow magically render HTML over HTTP faster than anybody else and there is nothing anybody can do to make their crap scroll-jacking websites faster. They buy into the myth that the web is incapable of delivering on its core capabilities: delivering hypertext and images quickly to a diverse and connected readership.

100 words 058

PPK writes of modern web development:

Tools don’t solve problems any more, they have become the problem.

I think he’s mostly correct, but I think there is some clarification required.

Web development tools fall into two broad categories:

  1. Local tools like preprocessors, task managers, and version control systems that help the developer output their own HTML, CSS, and JavaScript.
  2. Tools written in HTML, CSS, and JavaScript that the end user has to download for the developer to gain benefit.

It’s that second category that imposes a tax on the end user. Stop solving problems you don’t yet have.

Inlining critical CSS for first-time visits

After listening to Scott rave on about how much of a perceived-performance benefit he got from inlining critical CSS on first load, I thought I’d give it a shot over at The Session. On the chance that this might be useful for others, I figured I’d document what I did.

The idea here is that you can give a massive boost to the perceived performance of the first page load on a site by putting the most important CSS in the head of the page. Then you cache the full stylesheet. For subsequent visits you only ever use the external stylesheet. So if you’re squeamish at the thought of munging your CSS into your HTML (and that’s a perfectly reasonable reaction), don’t worry—this is a temporary workaround just for initial visits.

My particular technology stack here is using Grunt, Apache, and PHP with Twig templates. But I’m sure you can adapt this for other technology stacks: what’s important here isn’t the technology, it’s the thinking behind it. And anyway, the end user never sees any of those technologies: the end user gets HTML, CSS, and JavaScript. As long as that’s what you’re outputting, the specifics of the technology stack really don’t matter.

Generating the critical CSS

Okay. First question: how do you figure out which CSS is critical and which CSS can be deferred?

To help answer that, and automate the task of generating the critical CSS, Filament Group have made a Grunt task called grunt-criticalcss. I added that to my project and updated my Gruntfile accordingly:

grunt.initConfig({
    // All my existing Grunt configuration goes here.
    criticalcss: {
        dist: {
            options: {
                url: 'http://thesession.dev',
                width: 1024,
                height: 800,
                filename: '/path/to/main.css',
                outputfile: '/path/to/critical.css'
            }
        }
    }
});

I’m giving it the name of my locally-hosted version of the site and some parameters to judge which CSS to prioritise. Those parameters are viewport width and height. Now, that’s not a perfect way of judging which CSS matters most, but it’ll do.

Then I add it to the list of Grunt tasks:

// All my existing Grunt tasks go here.
grunt.loadNpmTasks('grunt-criticalcss');

grunt.registerTask('default', ['sass', /* …all my existing tasks… */ 'criticalcss']);

The end result is that I’ve got two CSS files: the full stylesheet (called something like main.css) and a stylesheet that only contains the critical styles (called critical.css).

Cache-busting CSS

Okay, this is a bit of a tangent but trust me, it’s going to be relevant…

Most of the time it’s a very good thing that browsers cache external CSS files. But if you’ve made a change to that CSS file, then that feature becomes a bug: you need some way of telling the browser that the CSS file has been updated. The simplest way to do this is to change the name of the file so that the browser sees it as a whole new asset to be cached.

You could use query strings to do this cache-busting but that has some issues. I use a little bit of Apache rewriting to get a similar effect. I point browsers to CSS files like this:

<link rel="stylesheet" href="/css/main.20150310.css">

Now, there isn’t actually a file named main.20150310.css, it’s just called main.css. To tell the server where the actual file is, I use this rewrite rule:

RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.+)\.(\d+)\.(js|css)$ $1.$3 [L]

That tells the server to ignore those numbers in JavaScript and CSS file names, but the browser will still interpret it as a new file whenever I update that number. You can do that in a .htaccess file or directly in the Apache configuration.

Right. With that little detour out of the way, let’s get back to the issue of inlining critical CSS.

Differentiating repeat visits

That number that I’m putting into the filenames of my CSS is something I update in my Twig template, like this (although this is really something that a Grunt task could do, I guess):

{% set cssupdate = '20150310' %}

Then I can use it like this:

<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">

I can also use JavaScript to store that number in a cookie called csscached so I’ll know if the user has a cached version of this revision of the stylesheet:

<script>
document.cookie = 'csscached={{ cssupdate }};expires="Tue, 19 Jan 2038 03:14:07 GMT";path=/';
</script>

The absence or presence of that cookie is going to be what determines whether the user gets inlined critical CSS (a first-time visitor, or a visitor with an out-of-date cached stylesheet) or whether the user gets a good ol’ fashioned external stylesheet (a repeat visitor with an up-to-date version of the stylesheet in their cache).

Here are the steps I’m going through:

First of all, set the Twig cssupdate variable to the last revision of the CSS:

{% set cssupdate = '20150310' %}

Next, check to see if there’s a cookie called csscached that matches the value of the latest revision. If there is, great! This is a repeat visitor with an up-to-date cache. Give ‘em the external stylesheet:

{% if _cookie.csscached == cssupdate %}
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">

If not, then dump the critical CSS straight into the head of the document:

{% else %}
<style>
{% include '/css/critical.css' %}
</style>

Now I still want to load the full stylesheet but I don’t want it to be a blocking request. I can do this using JavaScript. Once again it’s Filament Group to the rescue with their loadCSS script:

 <script>
    // include loadCSS here...
    loadCSS('/css/main.{{ cssupdate }}.css');

While I’m at it, I store the value of cssupdate in the csscached cookie:

    document.cookie = 'csscached={{ cssupdate }};expires="Tue, 19 Jan 2038 03:14:07 GMT";path=/';
</script>

Finally, consider the possibility that JavaScript isn’t available and link to the full CSS file inside a noscript element:

<noscript>
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">
</noscript>
{% endif %}

And we’re done. Phew!

Here’s how it looks all together in my Twig template:

{% set cssupdate = '20150310' %}
{% if _cookie.csscached == cssupdate %}
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">
{% else %}
<style>
{% include '/css/critical.css' %}
</style>
<script>
// include loadCSS here...
loadCSS('/css/main.{{ cssupdate }}.css');
document.cookie = 'csscached={{ cssupdate }};expires="Tue, 19 Jan 2038 03:14:07 GMT";path=/';
</script>
<noscript>
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">
</noscript>
{% endif %}

You can see the production code from The Session in this gist. I’ve tweaked the loadCSS script slightly to match my preferred JavaScript style but otherwise, it’s doing exactly what I’ve outlined here.

The result

According to Google’s PageSpeed Insights, I done good.

Optimising https://thesession.org/

A question of timing

I’ve been updating my collection of design principles lately, adding in some more examples from Android and Windows. Coincidentally, Vasilis unveiled a neat little page that grabs one list of principles at random—just keep refreshing to see more.

I also added this list of seven principles of rich web applications to the collection, although they feel a bit more like engineering principles than design principles per se. That said, they’re really, really good. Every single one is rooted in performance and the user’s experience, not developer convenience.

Don’t get me wrong: developer convenience is very, very important. Nobody wants to feel like they’re doing unnecessary work. But I feel very strongly that the needs of the end user should trump the needs of the developer in almost all instances (you may feel differently and that’s absolutely fine; we’ll agree to differ).

That push and pull between developer convenience and user experience is, I think, most evident in the first principle: server-rendered pages are not optional. Now before you jump to conclusions, the author is not saying that you should never do client-side rendering, but instead points out the very important performance benefits of having the server render the initial page. After that—if the user’s browser cuts the mustard—you can use client-side rendering exclusively.

The issue with that hybrid approach—as I’ve discussed before—is that it’s hard. Isomorphic JavaScript (terrible name) can theoretically help here, but I haven’t seen too many examples of it in action. I suspect that’s because this approach doesn’t yet offer enough developer convenience.

Anyway, I found myself nodding along enthusiastically with that first of seven design principles. Then I got to the second one: act immediately on user input. That sounds eminently sensible, and it’s backed up with sound reasoning. But it finishes with:

Techniques like PJAX or TurboLinks unfortunately largely miss out on the opportunities described in this section.

Ah. See, I’m a big fan of PJAX. It’s essentially the same thing as the Hijax technique I talked about many years ago in Bulletproof Ajax, but with the new addition of HTML5’s History API. It’s a quick’n’dirty way of giving the illusion of a fat client: all the work is actually being done in the server, which sends back chunks of HTML that update the interface. But it’s true that, because of that round-trip to the server, there’s a bit of a delay and so you often end up briefly displaying a loading indicator.

I contend that spinners or “loading indicators” should become a rarity

I agree …but I also like using PJAX/Hijax. Now how do I reconcile what’s best for the user experience with what’s best for my own developer convenience?

I’ve come up with a compromise, and you can see it in action on The Session. There are multiple examples of PJAX in action on that site, like pretty much any page that returns paginated results: new tune settings, the latest events, and so on. The steps for initiating an Ajax request used to be:

  1. Listen for any clicks on the page,
  2. If a “previous” or “next” button is clicked, then:
  3. Display a loading indicator,
  4. Request the new data from the server, and
  5. Update the page with the new data.

In one sense, I am acting immediately on user input, because I always display the loading indicator straight away. But because the loading indicator always appears, no matter how fast or slow the server responds, it sometimes only appears very briefly—just for a flash. In that situation, I wonder if it’s serving any purpose. It might even be doing the opposite to its intended purpose—it draws attention to the fact that there’s a round-trip to the server.

“What if”, I asked myself, “I only showed the loading indicator if the server is taking too long to send a response back?”

The updated flow now looks like this:

  1. Listen for any clicks on the page,
  2. If a “previous” or “next” button is clicked, then:
  3. Start a timer, and
  4. Request the new data from the server.
  5. If the timer reaches an upper limit, show a loading indicator.
  6. When the server sends a response, cancel the timer and
  7. Update the page with the new data.

Even though there are more steps, there’s actually less happening from the user’s perspective. Where previously you would experience this:

  1. I click on a button,
  2. I briefly see a loading indicator,
  3. I see the new data.

Now your experience is:

  1. I click on a button,
  2. I see the new data.

…unless the server or the network is taking too long, in which case the loading indicator appears as an interim step.

The question is: how long is too long? How long do I wait before showing the loading indicator?

The Nielsen Norman group offers this bit of research:

0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.

So I should set my timer to 100 milliseconds. In practice, I found that I can set it to as high as 200 to 250 milliseconds and keep it feeling very close to instantaneous. Anything over that, though, and it’s probably best to display a loading indicator: otherwise the interface starts to feel a little sluggish, and slightly uncanny. (“Did that click do any—? Oh, it did.”)
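
Stripped right down, the pattern looks something like this. It’s a sketch rather than the actual code on The Session, and showLoading, hideLoading, and updatePage are stand-ins for whatever your interface needs:

var delay = 250; // milliseconds to wait before showing the loading indicator

function fetchPage(url) {
    var timer = setTimeout(showLoading, delay);
    var request = new XMLHttpRequest();
    request.open('GET', url);
    request.onload = function () {
        clearTimeout(timer); // the response arrived in time: no indicator needed
        hideLoading();       // (or hide it, if it had already appeared)
        updatePage(request.responseText);
    };
    request.send();
}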

You can test the response time by looking at some of the simpler pagination examples on The Session: new recordings or new discussions, for example. To see examples of when the server takes a bit longer to send a response, you can try paginating through search results. These take longer because, frankly, I’m not very good at optimising some of those search queries.

There you have it: an interface that—under optimal conditions—reacts to user input instantaneously, but falls back to displaying a loading indicator when conditions are less than ideal. The result is something that feels like a client-side web thang, even though the actual complexity is on the server.

Now to see what else I can learn from the rest of those design principles.

The dConstruct 2012 website

I got an email recently from the guys at Cyber Duck asking me about the process behind the dConstruct 2012 website, beautifully designed by Bevan. Ethan actually used it as an example in his An Event Apart talk earlier this week. Anyway, here’s what I wrote…

The dConstruct conference takes place on the first Friday of September every year, and every year the conference has a different theme. That theme then influences the visual design of the site. To start with, we throw up a quick holding page and then, once we’ve got our speakers all set, we launch the site proper, usually a month or so before tickets go on sale.

At Clearleft, we believe very strongly in the universality of the web. We wanted the information on the 2012 dConstruct website to be available to anybody with an internet connection, no matter what kind of device or browser they’re using. That does not mean that the site should look and behave exactly the same in every browser or on every device. That isn’t practical. Nor is it desirable, in my opinion. Better browsers should be rewarded with a better experience. But every browser should be able to access the content. The best way to achieve that balance is through progressive enhancement. Responsive web design—when it’s done mobile first—is an excellent example of progressive enhancement in action.

The theme for dConstruct 2012 was “Playing With The Future”. It would be easy to go overboard with a visual design based on that theme, so we made sure to rein things in a bit and keep it fairly subtle. The colour scheme evolved from previous years, going in a more pastel direction. The use of Futura for headline text was the biggest change.

Those colours (muted green, red, and blue) carried through to the imagery. In the case of a conference website, the imagery is primarily photographs of speakers. That usually means JPEGs and sometimes those JPEGs can get pretty weighty. In this case, the monochrome nature of the images meant that we could use PNGs. Not only that, but through a little experimentation, we were able to get away with sometimes using as few as 16 colours for the PNG. That meant the file sizes could be nice and small. The average speaker photo was around 12K in weight.

Each speaker photo is 200x200 pixels in size. Now, you might think that we’d want to make those bigger as we moved up from small screen sizes to larger, desktop sizes. But actually, because the layout changes to put more of the photos side-by-side as the viewport gets larger, there was no need to do any clever responsive image-swapping. Instead, we spent that time getting the images as small in file size as we possibly could. The ImageOptim app for Mac is very handy for helping with this.

There are also some background images (for social media icons, background textures, and the like). These were all Base64-encoded into the stylesheet to avoid extra HTTP requests.

The priority was very much on keeping things speedy. When talking about responsive design, there’s a lot of emphasis on layout but actually that was a relatively straightforward part of the 2012 dConstruct site: there’s nothing too complicated going on there. Instead, the focus was on performance balanced with a striking visual design.

On the individual speaker pages, there’s a bit of conditional loading going on. For example, most pages include a link to a video on YouTube or Vimeo. On larger screen sizes, there’s a bit of JavaScript to pull in that video and display it right on the page. Crucially, this JavaScript runs after the rest of the document has already loaded so it won’t block the rendering. The end result is that everyone has access to the video: on smaller screens, it’s available by following a link; on larger screens, it’s available in situ.
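
The pattern is roughly this. It’s a simplified sketch rather than the production code: the breakpoint, the class name, and the assumption that the link points at an embeddable player URL are all mine:

window.addEventListener('load', function () {
    // Runs after the document has loaded, so it never blocks rendering.
    if (window.innerWidth < 768) {
        return; // smaller screens keep the plain link to the video
    }
    var link = document.querySelector('a.video'); // hypothetical hook
    if (link) {
        var iframe = document.createElement('iframe');
        iframe.src = link.href; // assumes an embeddable YouTube or Vimeo URL
        iframe.width = 640;
        iframe.height = 360;
        link.parentNode.replaceChild(iframe, link);
    }
});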

JavaScript is only ever used to enhance, never as a requirement for core functionality. The navigation, for example, has a nice toggle-to-reveal behaviour on small screens if JavaScript is available. But if JavaScript isn’t available or doesn’t load for some reason, then the navigation is simply visible by default. It’s important to consider safe defaults before adding behavioural enhancements.

In retrospect, it probably would’ve made more sense to simply inline the JavaScript at the bottom of each page: the external file isn’t very big at all, and that extra HTTP request could’ve been saved.

There were some other things that could’ve been done better: some of the images might have been better as SVG (the logo, for example). But all those lessons were carried forward and so the site for dConstruct 2013 is even snappier and more performant.

The test

There was once a time when the first thing you would do when you went to visit a newly-launched website was to run its markup through a validator.

Later on that was replaced by the action of bumping up the font size by a few notches—what Dan called the Dig Dug test.

Thanks to Ethan, we all started to make our browser windows smaller and bigger as soon as we visited a newly-launched site.

Now when I go to a brand new site I find myself opening up the “Network” tab in my browser’s developer tools to count the HTTP requests and measure the page weight.

Just like old times.

dConstruct optimisation

When I was helping Bevan with making the dConstruct site, I kept banging on to him about the importance of performance.

Don’t get me wrong: I wanted the site to look great, but I also very much wanted it to feel great …and nothing affects the feel of a site (the user’s experience, if you will) more than performance. As Jason wrote:

If you could only do one thing to prepare your desktop site for mobile and had to choose between employing media queries to make it look good on a mobile device or optimizing the site for performance, you would be better served by making the desktop site blazingly fast.

And yet this fundamental aspect of how performant a site is going to be is all too often left until the development phase. I’d really like to see it taken into account much earlier on, during the UX and visual design phases.

Anyway, as the dConstruct site came together, I just kept asking “What would Steve Souders do?”

For a start, that meant ripping out any boilerplate markup and CSS that was there “just in case.” I very much agree with Rachel when she says stop solving problems you don’t yet have. But one of the areas where the unfortunately-named HTML5 Boilerplate excels is in its suggestions for .htaccess rules so I made sure to rip off the best bits.

Initially jQuery was being included, but given how far browsers have come in their JavaScript support, I was able to ditch it and streamline the JavaScript a bit.

Wherever possible, I made sure that background images in CSS were Base64 encoded as data URIs; icons, textures, and the like. That helped to reduce the number of HTTP requests—one of the easy wins for improving performance.
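
Generating the data URI itself is trivial. Here’s a quick sketch using Node (the file path is just an example); the output gets pasted into a background-image url() value in the stylesheet:

var fs = require('fs');

// Read the image and print a data URI ready for use in CSS.
var png = fs.readFileSync('img/texture.png'); // example path
console.log('data:image/png;base64,' + png.toString('base64'));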

I’ve already mentioned the conditional loading that’s going on.

Then there’s the thorny issue of responsive images. The dConstruct 2012 site is similar to the dConstruct archive in that there is no correlation between browser width and image size: quite often, a smaller image is required for wider screens than for narrower viewports because of the presence of a grid. So instead of trying to come up with some complex interplay of client and server cross-communication to figure out which size image would be appropriate to serve up, I instead took the same approach as I did for the archive site: optimise the hell out of images, regardless of whether they’re going to be viewed on a desktop or a mobile device.

Take a look at the original image of Kevin Slavin compared to the version that appears on the dConstruct archive.

Kevin Slavin: original and retouched versions.

See how everything except the face is so much blurrier in the final version? That isn’t just an attempt to introduce some cool bokeh. It makes for much smaller JPGs with fewer jaggy artefacts. And because human beings tend to focus on other human faces, the technique isn’t really consciously noticeable (although you’ll notice it now that I’ve pointed it out to you).

The design of the 2012 dConstruct site called for monochrome images with colour filters applied.

Ben Hammersley

That turned out to be a boon for optimising the images. This time we were using PNGs rather than JPGs, and we were able to get the number of colours down to 32 or even 16. Run them through ImageOptim or Smush.it and you can squeeze even more bytes out of them.

The funny thing is that sweating the file sizes of images used to be part and parcel of web development. Back in the nineties, there was something of an aesthetic that grew out of the need to optimise images with limited (web-safe!) colour palettes. That was because bandwidth was at a premium and you could be pretty sure that plenty of people were accessing your site on slow connections.

Well, here we are fifteen years later and thanks to the rise of mobile, bandwidth is once again at a premium and we can be pretty sure that plenty of people are accessing our sites on slow connections. Yet again, mobile is highlighting issues that were always there. When did we get so lazy and decide it was acceptable to send giant unoptimised images down the pipe to our long-suffering visitors?

Matthew Pennell recently wrote:

…it’s certainly true that the golden rule I grew up with – no page should ever be over 100Kb – has long since been mothballed.

But why? That seems like a perfectly good and still-relevant rule to me.

Alas, on the dConstruct site I wasn’t able to hit that target. With an unprimed cache, the home page comes in at around 300K (it’s 17K with a primed cache). By far the largest file is the CSS, weighing in at 113K, followed by the web font, Futura bold oblique, at 32K.

By the way, when it comes to analysing performance in the browser, this missing manual for the WebKit inspector is really, really handy. I also ran the site through Google Page Speed, but it seems that the user-agent chooses an arbitrary browser width (960? 1024?), so some of the advice about scaling images needs to be taken with a pinch of salt when applied to responsive designs.

I took a look at some other conference sites too. The beautiful site for the Build conference comes in at just under a megabyte for the homepage—it has quite a few fonts and images. It also has a monochrome aesthetic going on so I suspect quite a few of those images could be squeezed down (and some far-future expiry dates would help for repeat visitors).

Then there’s the site for this year’s Mobilism conference, which is blazingly fast. The combined file size on the homepage isn’t that different to the dConstruct site (although the CSS is significantly smaller), and I suspect there’s some server-side wizardry going on. I’ll have to corner Stephen at the conference next week and quiz him about it.

For now, server-side performance optimisation is something beyond my ken. I should really do something about that, especially as I’m expecting the dConstruct site to take a hammering the day that tickets go on sale (May 29th, so save the date).

In the meantime, there’s still plenty I can do on the front end. As Bruce put it:

It seems to me that old-fashioned, oh-so-dull techniques might not be ready for retirement yet. You know: well-crafted HTML, keeping JavaScript for progressive enhancement rather than a pre-requisite for the page even displaying, and testing across browsers.

All those optimisation techniques we learned in the 90s—and even wacky ideas like lowsrc—are back in fashion. Everything old is new again.

One web

I was in Dublin recently to give a little talk at the 24 Hour Universal Design Challenge 2010. It was an interesting opportunity to present my own perspective on web design to an audience that consisted not just of web designers, but designers from many different fields.

I gave an overview of the past, present and future of web design as seen from where I’m standing. You can take a look at the slides but my usual caveat applies: they won’t make much sense out of context. There is, however, a transcript of the talk on the way (the whole thing was being captioned live on the day).

Towards the end of my spiel, I pointed to Tim Berners-Lee’s recent excellent article in Scientific American, Long Live the Web: A Call for Continued Open Standards and Neutrality:

The primary design principle underlying the Web’s usefulness and growth is universality. When you make a link, you can link to anything. That means people must be able to put anything on the Web, no matter what computer they have, software they use or human language they speak and regardless of whether they have a wired or wireless Internet connection. The Web should be usable by people with disabilities. It must work with any form of information, be it a document or a point of data, and information of any quality—from a silly tweet to a scholarly paper. And it should be accessible from any kind of hardware that can connect to the Internet: stationary or mobile, small screen or large.

We’re at an interesting crossroads right now. Recent developments in areas like performance and responsive design mean that we can realistically pursue that vision of serving up content at one URL to everyone, to the best of each device’s ability. At the same time, the opposite approach of creating multiple, tailored URLs is currently a popular technique.

At the most egregious and nefarious end of the spectrum, there’s Google’s disgusting backtracking on net neutrality which hinges on a central conceit that spits in the face of universality:

…we both recognize that wireless broadband is different from the traditional wireline world, in part because the mobile marketplace is more competitive and changing rapidly. In recognition of the still-nascent nature of the wireless broadband marketplace, under this proposal we would not now apply most of the wireline principles to wireless…

That’s the fat end of the wedge: literally having a different set of rules for one group of users based on something as arbitrary as how they are connecting to the network.

Meanwhile, over on the thin end of the wedge, there’s the fashion for serving up the same content at different URLs to different devices (often segregated within a subdomain like m. or mobile.—still better than the crack-smoking-inspired .mobi idea).

It’s not a bad technique at all, and it has served the web reasonably well as we collectively try to get our heads around the expanded browser and device landscape of recent years …although some of us cringe at the inherent reliance on browser-sniffing. At least the best practice in this area is to always offer a link back to the “regular” site.

Still, although the practice of splintering up the same content to separate URLs and devices has been a useful interim step, it just doesn’t scale. It’s also unnecessary.

Most of the time, creating a separate mobile website is simply a cop-out.

Hear me out.

First of all, I said “most of the time.” Maybe Garrett is onto something when he says:

It seems responsive pages are best for content while dedicated mobile pages are best for interactive applications. Discuss.

Although, as I pointed out in my brief list of false dichotomies, there’s no clear delineation between documents and applications (just as there’s no longer any clear delineation between desktop and mobile).

Still, let’s assume we’re talking about content-based sites. Segregating the same content into different URLs seems like a lot of work (quite apart from violating the principle of universality) if all you’re going to do is remove some crud that isn’t necessary in the first place.

As an example, here’s an article from The Guardian’s mobile site and here’s the same article as served up on the www. subdomain.

Leaving aside the way that the width is inexplicably set to a fixed number of pixels, it’s a really well-executed mobile site. The core content is presented very nicely. The cruft is gone.

But then, if that cruft is unnecessary, why serve it up on the “desktop” version? I can see how it might seem like a waste not to use extra screen space and bandwidth if it’s available, but I’d love to see an approach that’s truly based on progressive enhancement. Begin with the basic content; structure it to best fit the screen using media queries or some other form of feature detection (not browser detection); pull in extra content for large-screen user-agents, perhaps using some form of lazy loading. To put it in semantic terms, if the content could be marked up as an aside, it may be a prime candidate for lazy loading based on device capability (there’s a rough sketch of this after the quote below):

The aside element represents a section of a page that consists of content that is tangentially related to the content around the aside element, and which could be considered separate from that content.
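To make that concrete, here’s a rough sketch of what that kind of capability-based lazy loading could look like. The data-src attribute, the URL it points to, and the 60em breakpoint are all invented for the sake of the example:

```javascript
// A rough sketch of capability-based lazy loading for tangential content.
// The data-src attribute, the URL it points to, and the 60em breakpoint are
// all made up for illustration; the core content is in the HTML for everyone.
var aside = document.querySelector('aside[data-src]');

if (aside && window.matchMedia && window.matchMedia('(min-width: 60em)').matches) {
  fetch(aside.getAttribute('data-src'))
    .then(function (response) {
      return response.text();
    })
    .then(function (html) {
      aside.innerHTML = html;
    })
    .catch(function () {
      // On failure the aside simply stays empty; nothing essential is lost.
    });
}
```

Small-screen visitors never pay for the extra request, large-screen visitors get the enhancement, and if JavaScript fails everyone still gets the core content.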

I’m being unfair on The Guardian site …and most content-based sites with a similar strategy. Almost every site that has an accompanying mobile version—Twitter, Flickr, Wikipedia, BBC—began life when the desktop was very much in the ascendency. If those sites were being built today, they might well choose a more responsive, scalable solution.

It’s very, very hard to change an entire existing site. There’s a lot of inertia to battle against. Also, let’s face it: most design processes are not suited to solving problems in a device-independent, content-out way. It’s going to be challenging for large organisations to adjust to this way of thinking. It’s going to be equally challenging for agencies to sell this approach to clients—although I feel Clearleft may have a bit of an advantage in having designers like Paul who really get it. I think a lot of the innovation in this area will come from more nimble quarters: personal projects and small startups, most likely.

37signals recently documented some of their experiments with responsive design. As it turned out, they had a relatively easy time of it because they were already using a flexible approach to layout:

The key to making it easy was that the layout was already liquid, so optimizing it for small screens meant collapsing a few margins to maximize space and tweaking the sidebar layout in the cases where the screen is too narrow to show two columns.

In the comments, James Pearce, who is not a fan of responsiveness, was quick to cry foul:

I think you should stress that building a good mobile site or app probably takes more effort than flowing a desktop page onto a narrower piece of glass. The mobile user on the other side will quite possibly want to do different things to their desktop brethren, and deserves more than some pixel shuffling.

But the very next comment gets to the heart of why this well-intentioned approach can be so destructive:

A lot of mobile sites I’ve seen are dumbed down versions of the full thing, which is really annoying when you find that the feature you want isn’t there. The design here is the same site adapted to different screens, meaning the end product doesn’t lose any functionality. I think this is much better than making decisions for your users as to what they will and won’t want to see on their mobile phone.

I concur. I think there’s a real danger in attempting to do the right thing by denying users access to content and functionality “for their own good.” It’s patronising and condescending to assume you know the wants and needs of a visitor to your site based purely on their device.

The most commonly-heard criticism of serving up the same website to everyone is that the existing pages are too large, either in size or content. I agree. Far too many URLs resolve to bloated pages locked to a single-width layout. But the solution is not to make leaner, faster pages just for mobile users; the answer is to make leaner, faster pages for everybody.

Even the brilliant Bryan Rieger, who is doing some of the best responsive web design work on the planet with the Yiibu site, still talks about optimising only for certain users in his otherwise-excellent presentation, The End of Unlimited Bandwidth.

When I was reading the W3C’s Mobile Web Best Practices, I was struck by how much of it is sensible advice for web development in general, rather than web development specifically for mobile.

This is why I’m saying that most of the time, creating a separate mobile website is simply a cop-out. It’s a tacit acknowledgement that the regular “desktop” site is beyond help. The cop-out is creating an optimised experience for one subset of users while abandoning others to their bloated fate.

A few years back, there was a trend to provide separate text-only “accessible” websites, effectively ghettoising some users. Nowadays, it’s clear that universal design is a more inclusive, more maintainable approach. I hope that the current ghettoisation of mobile users will also end.

I’m with Team Timbo. One web.