Tags: images



The dConstruct 2012 website

I got an email recently from the guys at Cyber Duck asking me about the process behind the dConstruct 2012 website, beautifully designed by Bevan. Ethan actually used it as an example in his An Event Apart talk earlier this week. Anyway, here’s what I wrote…

The dConstruct conference takes place on the first Friday of September every year, and every year the conference has a different theme. That theme then influences the visual design of the site. To start with, we throw up a quick holding page and then, once we’ve got our speakers all set, we launch the site proper, usually a month or so before tickets go on sale.

At Clearleft, we believe very strongly in the universality of the web. We wanted the information on the 2012 dConstruct website to be available to anybody with an internet connection, no matter what kind of device or browser they’re using. That does not mean that the site should look and behave exactly the same in every browser or on every device. That isn’t practical. Nor is it desirable, in my opinion. Better browsers should be rewarded with a better experience. But every browser should be able to access the content. The best way to achieve that balance is through progressive enhancement. Responsive web design—when it’s done mobile first—is an excellent example of progressive enhancement in action.

The theme for dConstruct 2012 was “Playing With The Future”. It would be easy to go overboard with a visual design based on that theme, so we made sure to rein things in a bit and keep it fairly subtle. The colour scheme evolved from previous years, going in a more pastel direction. The use of Futura for headline text was the biggest change.

Those colours (muted green, red, and blue) carried through to the imagery. In the case of a conference website, the imagery is primarily photographs of speakers. That usually means JPEGs and sometimes those JPEGs can get pretty weighty. In this case, the monochrome nature of the images meant that we could use PNGs. Not only that, but through a little experimentation, we were able to get away with sometimes using as few as 16 colours for the PNG. That meant the file sizes could be nice and small. The average speaker photo was around 12K in weight.

Each speaker photo is 200x200 pixels in size. Now, you might think that we’d want to make those bigger as we moved up from small screen sizes to larger, desktop sizes. But actually, because the layout changes to put more of the photos side-by-side as the viewport gets larger, there was no need to do any clever responsive image-swapping. Instead, we spent that time getting the images as small in file size as we possibly could. The ImageOptim app for Mac is very handy for helping with this.

There are also some background images (for social media icons, background textures, and the like). These were all Base64-encoded into the stylesheet to avoid extra HTTP requests.
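
Something along these lines does the trick (the class name is purely illustrative, and the base64 payload here is just a 1×1 placeholder GIF rather than a real icon):

.icon-twitter {
    background-image: url("data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7");
    background-repeat: no-repeat;
}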

The priority was very much on keeping things speedy. When talking about responsive design, there’s a lot of emphasis on layout but actually that was a relatively straightforward part of the 2012 dConstruct site: there’s nothing too complicated going on there. Instead, the focus was on performance balanced with a striking visual design.

On the individual speaker pages, there’s a bit of conditional loading going on. For example, most pages include a link to a video on YouTube or Vimeo. On larger screen sizes, there’s a bit of JavaScript to pull in that video and display it right on the page. Crucially, this JavaScript runs after the rest of the document has already loaded so it won’t block the rendering. The end result is that everyone has access to the video: on smaller screens, it’s available by following a link; on larger screens, it’s available in situ.
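
In outline, the script looks something like this (the class name and breakpoint are illustrative, and it assumes the link already points at an embeddable player URL):

window.addEventListener('load', function () {
    if (window.innerWidth < 800) return; // an assumed large-screen breakpoint
    var link = document.querySelector('a.video-link');
    if (!link) return;
    var video = document.createElement('iframe');
    video.src = link.href; // assumes an embeddable YouTube or Vimeo URL
    link.parentNode.replaceChild(video, link);
});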

JavaScript is only ever used to enhance, never as a requirement for core functionality. The navigation, for example, has a nice toggle-to-reveal behaviour on small screens if JavaScript is available. But if JavaScript isn’t available or doesn’t load for some reason, then the navigation is simply visible by default. It’s important to consider safe defaults before adding behavioural enhancements.
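
Here’s a sketch of that pattern (it assumes a hidden class in the stylesheet that sets display: none, and a nav element carrying no other classes):

var nav = document.querySelector('nav');
if (nav) {
    var toggle = document.createElement('button');
    toggle.innerHTML = 'Menu';
    toggle.onclick = function () {
        nav.className = (nav.className === 'hidden') ? '' : 'hidden';
    };
    nav.parentNode.insertBefore(toggle, nav);
    nav.className = 'hidden'; // the nav is only ever hidden once JavaScript has run
}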

In retrospect, it probably would’ve made more sense to simply inline the JavaScript at the bottom of each page: the external file isn’t very big at all, and that extra HTTP request could’ve been saved.

There were some other things that could’ve been done better: some of the images might have been better as SVG (the logo, for example). But all those lessons were carried forward and so the site for dConstruct 2013 is even snappier and more performant.

Iconic imagery

There’s been some fantastic collaborative work done recently on the tricky issue of responsive images. Witness the community group and its attendant website, complete with logo.

Meanwhile, there’s been some great research into dealing with high-DPI displays (which the world and its dog have decided to label “retina”). There’s the in-depth analysis by Daan Jobsis which looks at what you can get away with when it comes to compression and quality for “retina” displays: quite a lot, as it turns out.

In fact, you may well be able to double the dimensions of an image while simultaneously bringing down its quality and end up with an image that is smaller in file size than the original, while still looking great on high-DPI “retina” displays. The guys over at Filament Group have labelled this Compressive Images. Nice.
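
In practice, that means exporting the image at twice its display dimensions but with the quality slider turned way down, then scaling it back down in the markup (the filenames here are made up):

<img src="photo-1600x1200.jpg" width="800" height="600" alt="a heavily-compressed photo displayed at half its pixel dimensions">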

I like that approach. No JavaScript polyfills. No lobbying of standards bodies.

I’m generally a fan of solutions that look for ways of avoiding the problem in the first place. Hence my approach to image optimisation for all devices, widescreen or narrow.

Of course this whole issue of responsive (or compressive) images should really only apply to photographic imagery. If you’re dealing with “text as images” …don’t. Use web fonts. If you’re dealing with logos or icons, there are other options, like SVG.

Then there’s the combination of web fonts and iconography. Why not use a small web font containing just the icons you need?

I tried this recently, diligently following Josh’s excellent blog post detailing how to get icon shapes out of Fireworks, into a font editor, and then into an actual font. It works a treat, although I concur with Josh’s suggestion that the technique should really only be implemented using the ::before and ::after pseudo-elements in combination with base-64 encoding the font file. That means it won’t work in every single browser, but that’s the point: these icons should be an enhancement, not a requirement.
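
The end result looks something like this (the font name and icon class are placeholders, and the base64 payload is truncated here for brevity):

@font-face {
    font-family: "Icons";
    src: url("data:application/font-woff;base64,d09GRg…") format("woff");
}
.icon-search::before {
    font-family: "Icons";
    content: "\e000"; /* the private-use unicode slot assigned to the icon */
}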

Having gone through the tortuous steps required to get my Mac all set up with the software required to follow Josh’s tutorial, I then spotted the note at the end of his article that pointed to Icomoon. That turns out to be a fantastic service. You can pick and choose from the icons provided or you can upload your own vector shapes. Then you can assign the unicode slots you want to use for the icons and you can get the resulting font file base-64 encoded. Very, very cool!

There’s a whole slew of icon-font services like that out there now: Pictos, Web Symbols, and Symbolset with its ingenious use of ligatures to allow for an accessible fallback.

Jenn is currently casting a critical eye over each of these services over on the Nerdary: part one, part two, and part three are all deserving of your time and attention.

Secret src

There’s been quite a brouhaha over the past couple of days around the subject of standardising responsive images. There are two different matters here: the process and the technical details. I’d like to address both of them.

Ill communication

First of all, there’s a number of very smart developers who feel that they’ve been sidelined by the WHATWG. Tim has put together a timeline of what happened:

  1. Developers got involved in trying to standardize a solution to a common and important problem.
  2. The WHATWG told them to move the discussion to a community group.
  3. The discussion was moved (back in February), a general consensus (not unanimous, but a majority) was reached about the picture element.
  4. Another (partial) solution was proposed directly on the WHATWG list by an Apple employee.
  5. A discussion ensued regarding the two methods, where they overlapped, and what the general opinions of each were. The majority of developers favored the picture element and the majority of implementors favored the srcset attribute.
  6. While the discussion was still taking place, and only 5 days after it was originally proposed, the srcset attribute (but not the picture element) was added to the draft.

A few points in that timeline have since been clarified. That second step—“The WHATWG told them to move the discussion to a community group”—turns out to be untrue. Some random person on the WHATWG mailing list (which is open to everyone) suggested forming a Community Group at the W3C. Alas, nobody else on the WHATWG mailing list corrected that suggestion.

Then there’s the apparent causality between steps 4 and 6. Initially, I also assumed that this was what happened: that Ted had proposed the srcset solution without even being aware of the picture solution that the Community Group had independently come up with. It turns out that’s not the case. Ted had drafted another email about the picture proposal but never ended up sending it. In fact, his email about srcset had been sitting in draft for quite a while and he only sent it out when he saw that Hixie was finally collating feedback on responsive images.

So from the outside it looked like there was preferential treatment being given to Ted’s proposal because it came from within the WHATWG. That’s not the case, but it must be said: the fact that srcset was so quickly added to the spec (albeit in a different form) doesn’t look good. It’s easy to understand why the smart folks in the Responsive Images Community Group felt miffed.

But let’s be clear: this is exactly how the WHATWG is supposed to work. Use-cases are evaluated and whatever Hixie thinks is the best solution gets put in the spec, regardless of how popular or unpopular it is.

Now, if that sounds abhorrent to you, I completely understand. A dictatorship should cause us to recoil.

That’s where the W3C come in. Their model is completely different. Everything is done by committee there.

Steve Faulkner chimed in on Tim’s post with his take on the two groups:

It seems like the development of HTML has turned full circle, the WHATWG was formed to overthrow the hegemony of the W3C, now the W3C acts as a counter to the hegemony of the WHATWG.

I think he’s right. The W3C keeps the rapid, sometimes anarchic approach of the WHATWG in check. But the opposite is also true. Without the impetus provided by the WHATWG, I’m not sure that the W3C HTML Working Group would ever get anything done. There’s a balance that actually works quite well in practice.

Back to the situation with responsive images…

Unfortunately, it appears to people within the Responsive Images Community Group that all their effort was wasted because their proposed solution was summarily rejected. In actuality, all the use-cases they gathered were immensely valuable. But it’s certainly true that the WHATWG didn’t make it clear how and where developers could best contribute.

Community Groups are a W3C creation. They don’t have anything to do with the WHATWG, who do all their work on their own mailing list, their own wiki and their own IRC channel.

I do think that the W3C Community Groups offer a good place to go bike-shedding on problems. That’s a term that’s usually used derisively, but sometimes it’s good to have a good ol’ bike-shedding session without clogging up the mailing list for everyone. But it needs to be clear that there’s a big difference between a Community Group and a Working Group.

I wish the WHATWG had done a better job of communicating to newcomers how best to contribute. It would have avoided a lot of the frustrations articulated by Wilto:

Unfortunately, we were laboring under the impression that the Community Group shared a deeper inherent connection with the standards bodies than it actually does.

But in any case, as Doctor Bruce writes, at least now there’s a proposed solution for responsive images in HTML: The Living Standard:

I don’t really care which syntax makes the spec, as long as it addresses the majority of use cases and it is usable by authors. I’m just glad we’re discussing the adaptive image problem at all.

So let’s take a look at the technical details.

src code

The Responsive Images Community Group came up with a proposal based on the idea of minting a new element (called, say, picture) that mimics the behaviour of video:

<picture alt="image description">
  <source src="/path/to/image.png" media="(min-width: 600px)">
  <source src="/path/to/otherimage.png" media="(min-width: 800px)">
  <img src="/path/to/image.png" alt="image description">
</picture>

One of the reasons why a new element was chosen rather than extending the existing img element was due to a misunderstanding. The WHATWG had explained that the parsing of img couldn’t be easily altered. That means that img must remain a self-closing element—any solution that requires a closing /img tag wouldn’t work. Alas, that was taken to mean that extending the img element in any way was off the cards.

The picture proposal has a number of things going for it. Its syntax is easily understandable for authors: if you know media queries, then you know how to use picture. It also has a good fallback for older browsers: a regular img element. This fallback mechanism (and the idea of multiple source elements with media queries) is exactly how the video element is specced.

Unfortunately using media queries on the sources of videos has proven to be very tricky for implementors, so they don’t want to see that pattern repeated.

Another issue with multiple source elements is that parsers must wait until the closing /picture tag before they can even begin to evaluate which image to show. That’s not good for performance.

So the alternate solution, based on Ted’s proposal, extends the img element using a new srcset attribute that takes a comma-separated list of values:

<img alt="image description"
     srcset="/path/to/image.png 800w, /path/to/otherimage.png 600w">

Not nearly as pretty, I think you’ll agree. But it is actually nice and compact for the “retina display” use-case:

<img alt="image description" src="/path/to/image.png" srcset="/path/to/otherimage.png 2x">

Just to be clear, that does not mean that otherimage.png is twice the size of image.png (though it could be). What you’re actually declaring is “Use image.png unless the device supports double-pixel density, in which case, use otherimage.png.”

Likewise, when I declare:

srcset="/path/to/image.png 600w 400h"

…it does not mean that image.png is 600 pixels wide by 400 pixels tall. Instead, it means that an action should be taken if the viewport matches those dimensions.

It took me a while to wrap my head around that distinction: I’m used to attributes describing the element they’re attached to, not the viewport.

Now for the really tricky bit: what do those numbers—600w and 400h—mean? Currently the spec is giving conflicting information.

Each image that’s listed in the srcset comma-separated list can have up to three values associated with it: w, h, and x. The x is pretty clear: that’s the pixel density of the device. The w and h values refer to the width and height of the viewport …but it’s not clear if they mean min-width/height or max-width/height.

If I’m taking a “Mobile First” approach to development, then srcset will meet my needs if w and h refer to min-width and min-height.

In this example, I’ll just use w to keep things simple:

<img src="small.png" srcset="medium.png 600w, large.png 800w">

(Expected behaviour: use small.png unless the viewport is wider than 600 pixels, in which case use medium.png unless the viewport is wider than 800 pixels, in which case use large.png).

If, on the other hand, w and h refer to max-width and max-height, I have to take a “Desktop First” approach:

<img src="large.png" srcset="medium.png 800w, small.png 600w">

(Expected behaviour: use large.png unless the viewport is narrower than 800 pixels, in which case use medium.png unless the viewport is narrower than 600 pixels, in which case use small.png).

One of the advantages of media queries is that, because they support both min- and max- width, they can be used in either use-case: “Mobile First” or “Desktop First”.

Because the srcset syntax will support either min- or max- width (but not both), it will therefore favour one use-case at the expense of the other.

Both use-cases are valid. Personally, I happen to use the “Mobile First” approach, but that doesn’t mean that other developers shouldn’t be able to take a “Desktop First” approach if they want. By the same logic, I don’t much like the idea of srcset forcing me to take a “Desktop First” approach.

My only alternative, if I want to take a “Mobile First” approach, is to duplicate image paths and declare ludicrous breakpoints:

<img src="small.png" srcset="small.png 600w, medium.png 800w, large.png 99999w">

I hope that this part of the spec offers a way out:

for the purposes of this requirement, omitted width descriptors and height descriptors are considered to have the value “Infinity”

I think that means I should be able to write this:

<img src="small.png" srcset="small.png 600w, medium.png 800w, large.png">

It’s all quite confusing and srcset doesn’t have anything approaching the extensibility of media queries, but I hope we can get it to work somehow.

dConstruct optimisation

When I was helping Bevan with making the dConstruct site, I kept banging on to him about the importance of performance.

Don’t get me wrong: I wanted the site to look great, but I also very much wanted it to feel great …and nothing affects the feel of a site (the user’s experience, if you will) more than performance. As Jason wrote:

If you could only do one thing to prepare your desktop site for mobile and had to choose between employing media queries to make it look good on a mobile device or optimizing the site for performance, you would be better served by making the desktop site blazingly fast.

And yet this fundamental aspect of how performant a site is going to be is all too often left until the development phase. I’d really like to see it taken into account much earlier on, during the UX and visual design phases.

Anyway, as the dConstruct site came together, I just kept asking “What would Steve Souders do?”

For a start, that meant ripping out any boilerplate markup and CSS that was there “just in case.” I very much agree with Rachel when she says stop solving problems you don’t yet have. But one of the areas where the unfortunately-named HTML5 Boilerplate excels is in its suggestions for .htaccess rules, so I made sure to rip off the best bits.
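
Far-future expiry headers are a good example of the kind of rule worth ripping off (a sketch, assuming Apache with mod_expires enabled):

<IfModule mod_expires.c>
    ExpiresActive on
    ExpiresByType image/png "access plus 1 month"
    ExpiresByType text/css "access plus 1 month"
</IfModule>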

Initially jQuery was being included, but given how far browsers have come in their JavaScript support, I was able to ditch it and streamline the JavaScript a bit.

Wherever possible, I made sure that background images in CSS were Base64-encoded as data URIs: icons, textures, and the like. That helped to reduce the number of HTTP requests—one of the easy wins for improving performance.

I’ve already mentioned the conditional loading that’s going on.

Then there’s the thorny issue of responsive images. The dConstruct 2012 site is similar to the dConstruct archive in that there is no correlation between browser width and image size: quite often, a smaller image is required for wider screens than for narrower viewports because of the presence of a grid. So instead of trying to come up with some complex interplay of client and server cross-communication to figure out which size of image would be appropriate to serve up, I took the same approach as I did for the archive site: optimise the hell out of images, regardless of whether they’re going to be viewed on a desktop or a mobile device.

Take a look at the original image of Kevin Slavin compared to the version that appears on the dConstruct archive.

[Images: Kevin Slavin, original and retouched]

See how everything except the face is so much blurrier in the final version? That isn’t just an attempt to introduce some cool bokeh. It makes for much smaller JPGs with fewer jaggy artefacts. And because human beings tend to focus on other human faces, the technique isn’t really consciously noticeable (although you’ll notice it now that I’ve pointed it out to you).

The design of the 2012 dConstruct site called for monochrome images with colour filters applied.

[Image: Ben Hammersley]

That turned out to be a boon for optimising the images. This time we were using PNGs rather than JPGs and we were able to get the number of colours down to 32 or even 16. Run them through ImageOptim or Smush.it and you can squeeze even more bytes out of them.

The funny thing is that sweating the file sizes of images used to be part and parcel of web development. Back in the nineties, there was something of an aesthetic that grew out of the need to optimise images with limited (web-safe!) colour palettes. That was because bandwidth was at a premium and you could be pretty sure that plenty of people were accessing your site on slow connections.

Well, here we are fifteen years later and thanks to the rise of mobile, bandwidth is once again at a premium and we can be pretty sure that plenty of people are accessing our sites on slow connections. Yet again, mobile is highlighting issues that were always there. When did we get so lazy and decide it was acceptable to send giant unoptimised images down the pipe to our long-suffering visitors?

Matthew Pennell recently wrote:

…it’s certainly true that the golden rule I grew up with – no page should ever be over 100Kb – has long since been mothballed.

But why? That seems like a perfectly good and still-relevant rule to me.

Alas, on the dConstruct site I wasn’t able to hit that target. With an unprimed cache, the home page comes in at around 300K (it’s 17K with a primed cache). By far the largest file is the CSS, weighing in at 113K, followed by the web font, Futura bold oblique, at 32K.

By the way, when it comes to analysing performance in the browser, this missing manual for the Webkit inspector is really, really handy. I also ran the site through Google Page Speed but it seems that the user-agent chooses an arbitrary browser width (960? 1024?) so some of the advice about scaling images needs to be taken with a pinch of salt when applied to responsive designs.

I took a look at some other conference sites too. The beautiful site for the Build conference comes in at just under a megabyte for the homepage—it has quite a few fonts and images. It also has a monochrome aesthetic going on so I suspect quite a few of those images could be squeezed down (and some far-future expiry dates would help for repeat visitors).

Then there’s the site for this year’s Mobilism conference, which is blazingly fast. The combined file size on the homepage isn’t that different to the dConstruct site (although the CSS is significantly smaller) and I suspect there’s some server-side wizardry going on. I’ll have to corner Stephen at the conference next week and quiz him about it.

For now, server-side performance optimisation is something beyond my ken. I should really do something about that, especially as I’m expecting the dConstruct site to take a hammering the day that tickets go on sale (May 29th—save the date).

In the meantime, there’s still plenty I can do on the front end. As Bruce put it:

It seems to me that old-fashioned, oh-so-dull techniques might not be ready for retirement yet. You know: well-crafted HTML, keeping JavaScript for progressive enhancement rather than a pre-requisite for the page even displaying, and testing across browsers.

All those optimisation techniques we learned in the 90s—and even wacky ideas like lowsrc—are back in fashion. Everything old is new again.

Image-y nation

There’s a great article by Wilto in the latest edition of A List Apart. It’s called Responsive Images: How they Almost Worked and What We Need.

What I really like about the article is that it details the thought process that went into trying to work out responsive images for the Boston Globe. Don’t get me wrong: I like it when articles provide code, but I really like it when they provide an insight into how the code was created.

The Filament Group team working on the Boston Globe site were attempting to abide by the two rules of responsive images that I’ve outlined before:

  1. The small image should be default.
  2. Don’t load images twice (in other words, don’t load the small images and the larger images).

There are three reasons for this: performance, performance, performance. As Luke put it so succinctly:

Being a Web designer & not considering speed/performance is like being a print designer & not considering how your colors will print.

That said, I came across a situation recently where loading both images for desktop browsers could actually be a pretty good thing to do.

Wait, wait! Hear me out…

Okay, so the way that many of the responsive image techniques work is by means of a cookie. The basic challenge of responsive images is for the client to communicate with the server (and let it know the viewport size) before the server starts sending images. Because cookies can be used both by the client and the server, they offer a way to do that (step one is sketched below):

  1. As the document begins to load, set a cookie on the client side with JavaScript recording the viewport width.
  2. On the server side, when an image is requested, check for the contents of that cookie and serve up the appropriate image for the viewport size.
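
Step one amounts to a single line of JavaScript, placed in the head so that it runs before any images are requested (the cookie name is arbitrary):

document.cookie = 'viewport=' + (window.innerWidth || document.documentElement.clientWidth) + '; path=/';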

There are some variations on this: you could initially route all image requests to send back a 1x1 pixel blank .gif and then, after the page has loaded, use JavaScript to load in the appropriate image for the viewport size.

That’s the theory anyway. As Mat outlined in his article, there’s a bit of a race condition between the cookie being set on the client and the images being requested from the server. New browsers do some clever pre-fetching of images, which means they can start fetching the small images before the cookie is set and then fetch the larger images as well, violating the second rule of responsive images.

But, like I said, in some situations that might not be so bad…

Josh is working on a responsive project at Clearleft right now—and doing a superb job of it—where he’s deliberately cutting the server-side aspect of responsive images out of the picture. He’s still starting with the small (mobile) images by default and then, after the page has loaded, swaps them out with JavaScript if the viewport is wide enough.

Suppose the small image is 20K and the large image is 60K. That means that desktop browsers are now loading 80K of images (instead of 60). On the face of it, this sounds like really bad news for performance… but because that extra 60K is being downloaded after the page has downloaded, the perceived performance isn’t bad at all. In fact, the experience feels quite snappy. Here’s what happens:

The markup contains the small image as well as some kind of indication of where the larger size resides (either in a query string or in a data- attribute):

<img class="photo" src="basestar.jpg" alt="a spiky seed" data-fullsrc="basestar-large.jpg">


That’s about 240 by 180 pixels. Now for the large-screen layout, we want those pictures to be more like 500 by 375 pixels:

@media screen and (min-width: 50em) {
    .photo {
        width: 500px;
        height: 375px;
    }
}

That results in a “blown up” pixely image.


Once the page has loaded, that small image is swapped out for the larger image specified in the data- attribute.
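
The swap itself is only a few lines (a sketch, assuming the data-fullsrc markup above and the 50em breakpoint from the media query):

window.addEventListener('load', function () {
    if (window.innerWidth < 800) return; // 50em at the default 16px font size
    var photos = document.querySelectorAll('img[data-fullsrc]');
    for (var i = 0; i < photos.length; i++) {
        photos[i].src = photos[i].getAttribute('data-fullsrc');
    }
});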


Large-screen browsers have now downloaded 20K more than they actually needed but the perceived performance of the page was actually pretty snappy:

  1. Blown-up pixely images act as placeholders while the page is downloading.
  2. Once the page has loaded, the full-sized images snap into place.

Does that sound familiar? This is exactly what the lowsrc attribute did.

I’m probably showing my age by even acknowledging the existence of lowsrc. It was a proprietary attribute created by Netscape back in the days of universally scarce bandwidth:

<IMG SRC=basestar.jpg LOWSRC=low-basestar.jpg ALT="a spiky seed">

(See how I’m using unquoted attributes and uppercase tags and attributes for added nostalgic value?)

The lowsrc value would usually be a monochrome version of the image in the src attribute.

[Image: a spiky seed in black and white]

And we only had 256 colours to play with. You tell that to the web developers today …they wouldn’t believe you.

Seriously though, it’s funny how problems from the early days of the web have a habit of resurfacing. I remember when Ajax was getting popular, all the problems associated with frames rose from the grave: bookmarking, breaking the back button, etc. Now that we’re in a time of small-screen devices on low-bandwidth networks, we’re rediscovering a lot of the same issues we had when we were developing for 640 pixel wide screens with 28K or 56K modems.

Ultimately, I think that what the great brainstorming around fixing the problems with the img element shows is a fundamental impedance mismatch between the fluid nature of the web and the fixed pixel-based nature of bitmap images. We’ve got ems for setting type and percentages for specifying the proportions of our grids, but when it comes to photographic images, all we’ve got is the pixel—a unit that makes less and less sense every day.

Responsible responsive images

I’m in Belfast right now for this year’s Build conference, so I am. I spent yesterday leading a workshop on responsive enhancement—the marriage of responsive design with progressive enhancement; a content-first approach to web design.

I spent a chunk of time in the afternoon going over the thorny challenges of responsive images. Jason has been doing a great job of rounding up all the options available to you when it comes to implementing responsive images:

  1. Responsive IMGs, Part 1,
  2. Responsive IMGs, Part 2—an in-depth look at techniques,
  3. Responsive IMGs, Part 3—the future of the img element.

Personally, I have two golden rules in mind when it comes to choosing a responsive image technique for a particular project:

  1. The small image should be default.
  2. Don’t load images twice (in other words, don’t load the small images and the larger images).

That first guideline simply stems from the mobile-first approach: instead of thinking of the desktop experience as the default, I’m assuming that people are using small screen, narrow bandwidth devices until proven otherwise.

Assuming a small-screen device by default, the problem is now how to swap out the small images for larger images on wider viewports …without downloading both images.

I like Mark’s simplified version of Scott’s original responsive image technique and I also like Andy’s contextual responsive images technique. They all share a common starting point: setting a cookie with JavaScript before any images have started loading. Then the cookie can be read on the server side to send the appropriate image (and remember, because the default is to assume a smaller screen, if JavaScript isn’t available the browser is given the safer fallback of small images).

Yoav Weiss has been doing some research into preloaders, cookies and race conditions in browsers and found out that in some situations, it’s possible that images will begin to download before the JavaScript in the head of the document has a chance to set the cookie. This means that in some cases, on first visiting a page, desktop browsers like IE9 might get the small images first and then the larger images, thereby violating the second rule (though, again, mobile browsers will always get the smaller images, never the larger images).

Yoav concludes:

Different browsers act differently with regard to which resources they download before/after the head scripts are done loading and running. Furthermore, that behavior is not defined in any spec, and may change with every new release. We cannot and should not count on it.

The solution seems clear: we need to standardise on browser download behaviour …which is exactly what the HTML standard is doing (along with standardising error handling).

That’s why I was surprised by Jason’s conclusion that device detection is the future-friendly img option.

Don’t get me wrong: using a service like Sencha.io SRC (formerly TinySRC)—which relies on user-agent sniffing and a device library lookup—is a perfectly reasonable solution for responsive images …for now. But I wouldn’t call it future friendly; quite the opposite. If anything, it might be the most present-friendly technique.

One issue with relying on user-agent sniffing is the danger of false positives: a tablet may get incorrectly identified as a mobile phone, a mobile browser may get incorrectly identified as a desktop browser and so on. But those are edge cases and they’re actually few and far between …for now.

The bigger issue with relying on user-agent sniffing is that you are then entering into an arms race. You can’t just plug in a device library and forget about it. The library must be constantly maintained and kept up to date. Given the almost-exponential expansion of the device and browser landscape, that’s going to get harder and harder.

Disruption will only accelerate. The quantity and diversity of connected devices—many of which we haven’t imagined yet—will explode, as will the quantity and diversity of the people around the world who use them. Our existing standards, workflows, and infrastructure won’t hold up. Today’s onslaught of devices is already pushing them to the breaking point. They can’t withstand what’s ahead.

So while I consider user-agent sniffing to be an acceptable short-term solution, I don’t think it can scale to the future onslaught—not to mention the tricky issue of the licensing landscape around device libraries.

There’s another reason why I tend to steer clear of device libraries like WURFL and Device Atlas. When you consider the way that I’m approaching responsive images, those libraries are over-engineered. They contain a massive list of mobile user-agent strings that I’ll never need. Remember, I’m taking a mobile-first approach and assuming a mobile browser by default. So if I’m going to overturn that assumption, all I need is a list of desktop user-agent strings. That’s a much less ambitious undertaking. Such a library wouldn’t need to be kept updated quite as often as a mobile device listing.

Anybody fancy putting it together?

The good new days

I’m continually struck by a sense of web design deja vu these days. After many years of pretty dull stagnation, things are moving at a fast clip once again. It reminds me of the web standards years at the beginning of the century—and not just because HTML5 Doctor has revived Dan’s excellent Simplequiz format.

Back then, there was a great spirit of experimentation with CSS. Inevitably the experimentation started on personal sites—blogs and portfolios—but before long that spirit found its way into the mainstream with big relaunches like ESPN, Wired, Fast Company and so on. Now I’m seeing the same transition happening with responsive web design and, funnily enough, I’m seeing lots of the same questions popping up:

  • How do we convince the client?
  • How do we deal with ad providers?
  • How will the CMS cope with this new approach?

Those are tricky questions but I’m confident that they can be answered. The reason I feel so confident is that there are such smart people working on this new frontier.

Just as we once gratefully received techniques like Dave’s CSS sprites and Doug’s sliding doors, now we have new problems to solve in fiendishly clever ways. The difference is that we now have Github.

Here’s a case in point: responsive images. Scaling images is all well and good but beyond a certain point it becomes overkill. How do we ensure that we’re serving up appropriately-sized images to various screen widths?

Scott kicked things off with his original code, a clever mixture of JavaScript, cookies, .htaccess rules and the HTML5 data- attribute prefix. Crucially, this technique uses progressive enhancement: the smaller image is the default; the larger image only gets swapped in when the screen width is wide enough. Update: Scott has just updated the code to remove the data-fullsrc usage.

Mark was able to take Scott’s code and fork it to come up with his own variation which uses less JavaScript.

Andy added his own twist on the technique by coming up with a slightly different solution: instead of looking at the width of the screen, take a look at the width of the element that contains the image. Basically, if you’re using percentages to scale your images anyway, you can compare the offsetWidth of the image to its declared width and, if it’s larger, swap in a larger image. He has written up this technique and you can see it in action on the holding page for this September’s Brighton Digital Festival.
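
The gist of it looks something like this (a sketch that borrows the data-fullsrc pattern from earlier and assumes each img carries a width attribute; Andy’s actual implementation differs):

var images = document.querySelectorAll('img[data-fullsrc]');
for (var i = 0; i < images.length; i++) {
    var image = images[i];
    // if the image is being displayed wider than its declared width,
    // the containing element is big enough to warrant the larger version
    if (image.offsetWidth > parseInt(image.getAttribute('width'), 10)) {
        image.src = image.getAttribute('data-fullsrc');
    }
}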

I particularly like Andy’s Content First approach. The result is that sometimes a large screen width might mean you actually want smaller images (because the images will appear within grid columns) whereas a smaller screen, like maybe a tablet, might get the larger images (if the content is linearised, for example). So it isn’t the width of the viewport that matters; it’s the context within which the image is appearing.

All three approaches are equally valuable. The technique you choose will depend on your own content and the specific kind of problem you are trying to solve.

The Mobile Safari orientation and scale bug is another good example of a crunchy problem that smart people like Shi Chuan and Mathias Bynens can tackle through the interplay of blogs, Github and, to a lesser extent, Twitter. I just love seeing ideas cross-pollinate between these clever clogs.