Tags: process

Design sprinting

James and I went to Ipswich last week for work. But this wasn’t part of an ongoing project—this was a short intense one-week feasibility study.

Leon from Suffolk Libraries got in touch with us about a project they’re planning to carry out soon: replacing their self-service machines with something more up-to-date. But rather than dive into commissioning the project straight away, he wisely decided to start with a one-week sprint to figure out exactly what the project would need to go ahead.

So that’s what James and I did. It was somewhat similar to the design sprint popularised by GV. We ensconced ourselves in the Ipswich library and packed a whole lot of work into five days. There was lots of collaboration, lots of sketching, lots of iterative design, and some rough’n’ready code. It was challenging, but a lot of fun. Also: we stayed in a pretty sweet AirBnB.

Our home for the week. This is a nice AirBnB.

You can read all about it in our case study. You can also read all about it from Leon’s point of view on his blog:

I can’t recommend this kind of research sprint enough. We got a report, detailed technical validation of an idea, mock ups and a plan for how to proceed, while getting staff and stakeholders involved in the project – all in the space of 5 days.

I think this approach makes a lot of sense. By the end of the week, James and I felt pretty confident about estimating times and costs for the full project. Normally trying to estimate that kind of thing can be a real guessing game. But with the small investment of one week’s worth of effort, you get a whole lot more certainty and confidence.

Have a look for yourself.

Where to start?

A lot of the talks at this year’s Chrome Dev Summit were about progressive web apps. This makes me happy. But I think the focus is perhaps a bit too much on the “app” part and not enough on “progressive”.

What I mean is that there’s an inevitable tendency to focus on technologies—Service Workers, HTTPS, manifest files—and not so much on the approach. That’s understandable. The technologies are concrete, demonstrable things, whereas approaches, mindsets, and processes are far more nebulous in comparison.

Still, I think that the most important facet of building a robust, resilient website is how you approach building it rather than what you build it with.

Many of the progressive web app demos use server-side and client-side rendering, which is great …but that aspect tends to get glossed over:

Browsers without service worker support should always be served a fall-back experience. In our demo, we fall back to basic static server-side rendering, but this is only one of many options.

I think it’s vital to not think in terms of older browsers “falling back” but to think in terms of newer browsers getting a turbo-boost. That may sound like a nit-picky semantic subtlety, but it’s actually a radical difference in mindset.
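To make that concrete, here’s a minimal sketch of the turbo-boost way of thinking (the /sw.js path is a placeholder, not from any of the demos mentioned): the server-rendered page works for everyone, and only browsers that understand service workers opt in to the enhanced, offline-capable experience.

// Enhancement, not requirement: browsers without service worker
// support simply keep using the regular server-rendered pages.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js').then(function (registration) {
    console.log('Offline enhancement active for', registration.scope);
  }).catch(function (error) {
    // If registration fails, nothing is lost; the site still works.
    console.log('Service worker registration failed:', error);
  });
}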

Many of the arguments I’ve heard against progressive enhancement—like Tom’s presentation at Responsive Field Day—talk about the burdensome overhead of having to bolt on functionality for older or less-capable browsers (even Jake has done this). But the whole point of progressive enhancement is that you start with the simplest possible functionality for the greatest number of users. If anything gets bolted on, it’s the more advanced functionality for the newer or more capable browsers.

So if your conception of progressive enhancement is that it’s an added extra, I think you really need to turn that thinking around. And that’s hard. It’s hard because you need to rewire some well-engrained pathways.

There is some precedent for this though. It was really, really hard to convince people to stop using tables for layout and start using CSS instead. That was a tall order—completely change the way you approach building on the web. But eventually we got there.

When Ethan came out with Responsive Web Design, it was an equally difficult pill to swallow, not because of the technologies involved—media queries, percentages, etc.—but because of the change in thinking that was required. But eventually we got there.

These kinds of fundamental changes are inevitably painful …at first. After years of building websites using tables for layout, creating your first CSS-based layout was demoralisingly difficult. But the second time was a bit easier. And the third time, easier still. Until eventually it just became normal.

Likewise with responsive design. After years of building fixed-width websites, trying to build in a fluid, flexible way was frustratingly hard. But the second time wasn’t quite as hard. And the third time …well, eventually it just became normal.

So if you’re used to thinking of the all-singing, all-dancing version of your site as the starting point, it’s going to be really, really hard to instead start by building the most basic, accessible version first and then work up to the all-singing, all-dancing version …at first. But eventually it will just become normal.

For now, though, it’s going to take work.

The recent redesign of Google+ is a true case study in building a performant, responsive, progressive site:

With server-side rendering we make sure that the user can begin reading as soon as the HTML is loaded, and no JavaScript needs to run in order to update the contents of the page. Once the page is loaded and the user clicks on a link, we do not want to perform a full round-trip to render everything again. This is where client-side rendering becomes important — we just need to fetch the data and the templates, and render the new page on the client. This involves lots of tradeoffs; so we used a framework that makes server-side and client-side rendering easy without the downside of having to implement everything twice — on the server and on the client.

This took work. Had they chosen to rely on client-side rendering alone, they could have built something quicker. But I think it was worth laying that solid foundation. And the next time they need to build something this way, it’s going to be less work. Eventually it just becomes normal.

But it all starts with thinking of the server-side rendering as the default. Server-side rendering is not a fallback; client-side rendering is an enhancement.
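Here’s a rough sketch of what that default-plus-enhancement structure can look like (renderOnClient and the JSON response are hypothetical, not taken from the Google+ codebase): every link is a real link that the server can handle, and client-side rendering only takes over when the browser is capable of it.

// Every page is server-rendered; every link works without JavaScript.
// Client-side rendering is layered on top, only where supported.
if ('fetch' in window && 'pushState' in history) {
  document.addEventListener('click', function (event) {
    var link = event.target.closest ? event.target.closest('a') : null;
    if (!link || link.origin !== location.origin) {
      return; // let the browser do its thing
    }
    event.preventDefault();
    fetch(link.href, { headers: { 'Accept': 'application/json' } })
      .then(function (response) { return response.json(); })
      .then(function (data) {
        renderOnClient(data); // hypothetical client-side templating step
        history.pushState(null, '', link.href);
      })
      .catch(function () {
        // Any failure falls back to a normal full-page navigation.
        window.location.href = link.href;
      });
  });
}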

That’s exactly the kind of mindset that enables Jack Franklin to build robust, resilient websites:

Now we’ll build the React application entirely on the server, before adding the client-side JavaScript right at the end.

I had a chance to chat briefly with Jack at the Edge conference in London and I congratulated him on the launch of a Go Cardless site that used exactly this technique. He told me that the decision to flip the switch and make it act as a single page app came right at the end of the project. Server-side rendering was the default; client-side rendering was added later.

The key to building modern, resilient, progressive sites doesn’t lie in browser technologies or frameworks; it lies in how we think about the task at hand; how we approach building from the ground up rather than the top down. Changing the way we fundamentally think about building for the web is inevitably going to be challenging …at first. But it will also be immensely rewarding.

Polyfills and products

I was chatting about polyfills recently with Bruce and Remy—who coined the term:

A polyfill, or polyfiller, is a piece of code (or plugin) that provides the technology that you, the developer, expect the browser to provide natively. Flattening the API landscape if you will.

I mentioned that I think that one of the earliest examples of what we would today call a polyfill was the IE7 script by Dean Edwards.

Dean wrote this (amazing) piece of JavaScript back when Internet Explorer 6 was king of the hill and Microsoft had stopped development of their browser entirely. It was a pretty shitty time in browserland back then. While other browsers were steaming ahead with standards support, Dean’s script pulled IE6 up by its bootstraps and made it understand CSS2.1 features. Crucially, you didn’t have to write your CSS any differently for the IE7 script to work—the classic hallmark of a polyfill.

Scott has a great post over on the Filament Group blog asking To Picturefill, or not to Picturefill?. Therein, he raises the larger issue of when to use polyfills of any kind. After all, every polyfill you use is a little bit of a tax that the end user must pay with a download.

Polyfills typically come at a cost to users as well, since they require users to download and execute JavaScript in order to work. Sometimes, frequently even, that cost outweighs the benefits that the polyfill would bring. For that reason, the question of whether or not to use any polyfill should be taken seriously.

Scott takes a very thoughtful approach to using any polyfill, and I try to do the same. I feel that it’s important to have an exit strategy for every polyfill you decide to use. After all, the whole point of a polyfill is that it’s a stop-gap measure until a particular feature is more widely supported.
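As a rough illustration of that stop-gap thinking (the file path is a placeholder, and Picturefill stands in for whichever polyfill you’re weighing up): only ship the script to browsers that actually lack the feature, and leave yourself an explicit note about when to remove it.

// Exit strategy: once native picture/srcset support is widespread
// enough for your audience, delete this whole block.
if (!('HTMLPictureElement' in window)) {
  var polyfill = document.createElement('script');
  polyfill.src = '/js/picturefill.js'; // placeholder path
  polyfill.async = true;
  document.head.appendChild(polyfill);
}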

And that’s where I run into one of the issues of working at an agency. At Clearleft, our time working with a client usually lasts a few months. At the end of that time, we’ll have delivered whatever the client needs: sometimes that’s design work; sometimes it’s design and a front-end pattern library.

Every now and then we get to revisit a project—like with Code for America—but that’s the exception rather than the rule. We’ve had to get very, very good at handover precisely because we won’t be the ones maintaining the code that we deliver (though we always try to budget in time to revisit the developers who are working with the code to answer any questions they might have).

That makes it very tricky to include a polyfill in our deliverables. We’d need to figure out a way of also including a timeline for revisiting that polyfill and evaluating when it’s time to drop it. That’s not an impossible task, but it’s much, much easier if you’re a developer working on a product (as opposed to a developer working at an agency). If you’re going to be the same person working on the code in the future—as well as working on it right now—it gets a lot easier to plan for evaluating polyfill usage further down the line. Set a recurring item in your calendar and you should be all set.

It’s a similar situation with vendor prefixes. Vendor prefixes were never intended to be a long-lasting part of any style sheet. Like polyfills, they’re supposed to be used with an exit strategy in mind: when the time is right, remove the prefixed styles, leaving only the unprefixed standardised CSS. Again, that’s a lot easier to do if you’re working on a product and you know that you’ll be the one revisiting the CSS later on. That’s harder to do at an agency where you’re handing over CSS to someone else.

I’m quite reluctant to use any vendor prefixes at all—which is as it should be; vendor prefixes should not be used lightly. Sometimes they’re unavoidable, but that shouldn’t stop us thinking about how to remove them at a later date.

I’m mostly just thinking out loud here. I guess my point is that certain front-end development techniques and technologies feel like they’re better suited to product work rather than agency work. Although I’m sure there are plenty of counter-examples out there too of tools that really fit the agency model and are less useful for working on the same product over a long period.

But even though the agency world and the product world are very different in lots of ways, both of them require us to think about the future. How long will the code you’re writing today last? And do you have a plan for when it needs updating or replacing?

Responding

Last week I had a responsive-themed tour of London.

On Tuesday I went up to Chelsea to spend the day workshopping with some people at Education First. It all went rather splendidly, I’m happy to report.

It was an interesting place. First of all, there’s the office building itself. Once owned by News International, it has a nice balance between open-plan and grouped areas. Then there’s the people. Just 20% of them are native English speakers. It was really nice to be in such a diverse group.

The workshop attendees represented a good mix of skills too: UX, front-end development, and visual design were at the forefront, but project management and content writing were also represented. That made the exercises we did together very rewarding.

I was particularly happy that the workshop wasn’t just attended by developers or designers, seeing as one of the messages I was hammering home all day was that responsive web design affects everyone at every stage of a project:

Y’see, it’s my experience that the biggest challenges of responsive design (which, let’s face it, now means web design) are not technology problems. Sure, we’ve got some wicked problems when dealing with non-flexible media like bitmap images, which fight against the flexible nature of the web, but thanks to the work of some very smart and talented people, even those kinds of issues are manageable.

No, the biggest challenges, in my experience, are to do with people. Specifically, the way that people work together.

On Thursday evening, I reiterated that point at The Digital Pond event in Islington …leading at least one person in the audience to declare that they were having an existential crisis (not my intention, honest).

I also had the pleasure of hearing Sally give her take on responsive design. She was terrific at Responsive Day Out 2 and she was, of course, terrific here again. If you get the chance to see her speak, take it.

There should be videos from Digital Pond available at some point, so you’ll be able to catch up with our talks then.

Notes from the edge

I went up to London for the Edge Conference on Friday. It’s not your typical conference. Instead of talks, there are panels, but not the crap kind, where nobody says anything of interest: these panels are ruthlessly curated and prepared. There’s lots of audience interaction too, but again, not the crap kind, where one or two people dominate the discussion with their own pet topics: questions are submitted ahead of time, and then you are called upon to ask it at the right moment. It’s like Question Time for the web.

Components

The first panel was on that hottest of topics: Web Components. Peter Gasston kicked it off with a superb introduction to the subject. Have a read of his equally-excellent article in Smashing Magazine to get the gist.

Needless to say, this panel covered similar ground to the TAG meetup I attended a little while back, and left me with similar feelings: I’m equal parts excited and nervous; optimistic and worried. If Web Components work out, and we get a kind of emergent semantics of UI widgets, it’ll be a huge leap forward for the web. But if we end up with a Tower of Babel, things could get very messy indeed. We’ll probably get both at once. And I think that’ll be (mostly) okay.

I butted into the discussion when the topic of accessibility came up. I was a little worried about what I was hearing, which was mainly, “Oh, ARIA takes care of the accessibility.” I felt like Web Components were passing the buck to ARIA, which would be fine if it weren’t for the fact that ARIA can’t cover all the possible use-cases of Web Components.

I chatted about this with Derek and Nicole during the break, but I’m not sure if I was articulating my thoughts very well, so I’ll have another stab at it here:

Let me set the scene for Web Components…

Historically, HTML has had a limited vocabulary for expressing interface widgets—mostly a bunch of specialised form fields like, say, the select element. The plus side is that there’s a consensus of understanding among the browsers, so you don’t have to explain what a select element does; the browsers already know. The downside is that whenever we want to add a new interface element like input type="range", it takes time to get into browsers and through the standards process. Web Components allow you to conjure up interface elements, and you don’t have to lobby browser makers or standards groups in order to make browsers understand your newly-minted element: you provide all the behavioural and styling instructions in one bundle.

So Web Components make use of HTML, JavaScript, and (scoped) CSS. The possibility space for the HTML is infinite: if you need an element that doesn’t exist, you just invent it. The possibility space for the JavaScript is pretty close to infinite: it’s a Turing-complete language that can be wrangled to do just about anything. The possibility space for CSS isn’t infinite, but it’s pretty darn big: there’s not much you can’t do with it at this point.

What’s missing from that bundle of HTML, JavaScript, and CSS are hooks for assistive technology. Up until now, this is something we’ve mostly left to the browser. We don’t have to include any hooks for assistive technology when we use a select element because the browser knows what it is and can expose that knowledge to the assistive technology. If we’re going to start making up our own interface elements, we now have to take on the responsibility of providing that information to assistive technology.

How do we do that? Well, right now, our only option is to use ARIA …but the possibility space defined by ARIA is much, much smaller than HTML, JavaScript, or CSS.
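To make that concrete, here’s a sketch using today’s custom elements API (which settled after this discussion) of an invented element taking on that work itself. It only succeeds because tab-like widgets happen to exist in ARIA’s fixed vocabulary; invent an interface pattern with no ARIA equivalent and there’s nothing to reach for.

// A made-up element has no built-in semantics, so it has to borrow
// them from ARIA's predefined roles.
class FancyTabs extends HTMLElement {
  connectedCallback() {
    this.setAttribute('role', 'tablist');
    Array.from(this.children).forEach(function (child, index) {
      child.setAttribute('role', 'tab');
      child.setAttribute('aria-selected', index === 0 ? 'true' : 'false');
      child.setAttribute('tabindex', index === 0 ? '0' : '-1');
    });
  }
}
customElements.define('fancy-tabs', FancyTabs);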

That’s not a criticism of ARIA: that’s the way it was designed. It’s a reactionary technology, designed to plug the gaps where the native semantics of HTML just don’t cut it. The vocabulary of ARIA was created by looking at the kinds of interface elements people are making—tabs, sliders, and so on. That’s fine, but it can’t scale to keep pace with Web Components.

The problem that Web Components solve—the fact that it currently takes too long to get a new interface element into browsers—doesn’t have a corresponding solution when it comes to accessibility hooks. Just adding more and more predefined ARIA roles won’t cut it—we need some kind of extensible accessibility that matches the expressive power of Web Components. We don’t need a bigger vocabulary in ARIA, we need a way to define our own vocabulary—an extensible ARIA, if you will.

Hmmm… I’m still not sure I’m explaining myself very well.

Anyway, I just want to make sure that accessibility doesn’t get left behind (again!) in our rush to create a new solution to our current problems. With Web Components still in their infancy, this feels like the right time to raise these concerns.

That highlights another issue, one that Nicole picked up on. It’s really important that the extensible web community and the accessibility community talk to each other.

Frankly, the accessibility community can be its own worst enemy sometimes. So don’t get me wrong: I’m not bringing up my concerns about the accessibility of Web Components in order to cry “fail!”—I just want to make sure that it’s on the table (and I’m glad that Alex is one of the people driving Web Components—his history with Dojo reassures me that we can push the boundaries of interface widgets on the web without leaving accessibility behind).

Anyway …that’s enough about that. I haven’t mentioned all the other great discussions that took place at Edge Conference.

Developer Tooling

The Web Components panel was followed by a panel on developer tools. This was dominated by representatives from different browsers, each touting their own set of in-browser tools. But the person who I really wanted to rally behind was Kenneth Auchenberg. He quite rightly asks why our developer tools and our text editors are two different apps. And rather than try to put text editors into developer tools, what we really want is to pull developer tools into our text editors …all the developer tools from all the browsers, not just one set of developer tools from one specific browser.

If you haven’t seen Kenneth’s presentation from Full Frontal, I urge you to watch it or listen to it.

I had my hand up to jump into the discussion towards the end, but time ran out so I didn’t get a chance. Paul came over afterwards and asked what I was going to say. Here’s what I told him…

I’m fascinated by the social dynamics around how browsers get made. This is an area where different companies are simultaneously collaborating and competing.

Broadly speaking, the feature set of a web browser can be divided into two buckets:

In one bucket, you’ve got the support for standards like HTML, CSS, JavaScript. Now, individual browsers might compete on how quickly or how thoroughly they get those standards implemented, but at this point, there’s no disagreement about the fact that proprietary crap is bad, standards are good, and that no matter how painful the process can be, browser makers all need to get together and work on standards together. Heck, even Apple can’t avoid collaborating on this stuff.

In the other bucket, you’ve got all the stuff that browsers compete against each other with: speed, security, the user interface, etc. A lot of this takes place behind closed doors, and that’s fine. There’s no real need for browser makers to collaborate on this stuff, and it could even hurt their competitive advantage if they did collaborate.

But here’s the problem: developer tools seem to be coming out of that second bucket instead of the first. There doesn’t seem to be much communication between the browser makers on developer tools. That’s fine if you see developer tools as an opportunity for competition, but it’s lousy if you see developer tools as an opportunity for interoperability.

This is why Kenneth’s work is so important. He’s crying out for more interoperability between browsers when it comes to developer tools. Why can’t they all use the same low-level APIs under the hood? Then they can still compete on how pretty their dev tools look, without making life miserable for developers who want to move quickly between browsers.

As painful as it might be, I think that browser makers should get together in some semi-formalised way to standardise this stuff. I don’t think that the W3C or the WHATWG are necessarily the right places for this kind of standardisation, but any kind of official cooperation would be good.

Build Process

The panel on build processes for front-end development kicked off with Gareth saying a few words. Some of those words included the sentence:

Make is probably older than you.

Cue glares from me and Scott.

Gareth also said that making websites means making software. We’re all making software—live with it.

This made me nervous. I’ve always felt that one of the great strengths of the web has been its low barrier to entry. The idea of a web that can only be made by qualified software developers doesn’t sound like a good thing to me.

Fortunately, things got cleared up later on. Somebody else asked a question about whether the barrier to entry was being raised by the complexity of tools like preprocessors, compilers, and transpilers. The consensus of the panel was that these are power tools for power users. So if someone were learning to make a website from scratch, you wouldn’t start them off with, say, Sass, without them first learning CSS.

It was a fun panel, made particularly enjoyable by the presence of Kyle Simpson. I like the cut of his jib. Alas, I didn’t get the chance to tell him that in person. I had to duck out of the afternoon’s panels to get back to Brighton due to unforeseen family circumstances. But I did manage to catch some of the later panels on the live stream.

Closing thoughts

A common thread I noticed amongst many of the panels was a strong bias for decentralisation, rather than collaboration. That was most evident with Web Components—the whole point is that you can make up your own particular solution rather than waiting for a standards body. But it was also evident in the Developer Tools line-up, where each browser maker is reinventing the same wheels. And when it came to Build Process, it struck me that everyone is scratching their own itch instead of getting together to work on a shared solution.

There’s nothing wrong with that kind of Darwinian approach to solving our problems, but it does seem a bit wasteful. Mairead Buchan was at Edge Conference too and she noticed the same trend. Sounds like she’s going to do something about it too.

Launching for America

I’ve already written a bit about the process of working with Code for America, which has been an absolute pleasure. Just today, Jon described it as “the closest thing to a dream project.”

I concur. Not only did the client communication work out really well, but their willingness to share the pattern library we put together warms the cockles of my heart.

When Clearleft’s part in the project officially wrapped up, I wrote:

It’ll be a while yet before the new site rolls out.

That was exactly one month ago.

The new Code for America website went live last Friday.

I’m impressed! That’s a pretty short timescale to rebuild a fairly large website, not only changing the front-end codebase, but also switching out the back-end stack as well. They must’ve been working flat out.

I’ve worked on projects in the past where my initial excitement at the project’s wrap diminished as the site launch date slipped further and further over the horizon of the future. It isn’t unusual to have a gap of many months between the end of Clearleft’s time on a project and seeing the site go live. I’m really happy that the Code for America project bucks that trend.

Climbing Mount Responsive

I’m back from Munich, where I spent three solid days workshopping with AutoScout24. I’m happy to report that it went really, really well. It’s restored my confidence after the negative feedback I got in Tel Aviv.

Three days is quite a long time to spend workshopping, so I was mostly winging it. But that extended period also allowed us to dive deep into specific issues and questions (all the usual suspects: how to handle navigation, images, complex interactions, etc.).

The real issues, however, were much more “bigger picture”—how to handle the transition to responsive design for a big desktop-centric site that’s been growing for over a decade. By the end of the three days, we had divided the options into three groups:

  1. Start making any new pages and sections of the site responsive. After a while of doing that, the team would develop a pretty good feeling of what it would take to then go back and retrofit what’s already online. The downside of this approach is that it would provide an inconsistent user experience: users would be moving from responsive to non-responsive parts of the site, which could be confusing.
  2. Leave the current fixed-width grid as it is, but focus on making all the components of the page flexible. Once all the components are fluid, then it should be a matter of switching over to a fluid grid in one fell swoop. On the plus side, this means that the whole site would then be responsive. On the negative side, until all the components have been made flexible—which could take some time—the site remains rigidly fixed-width and desktop-centric.
  3. Rebuild the mobile site, using it as a seed from which to grow a new responsive site. On the face of it, having a separate mobile subdomain might seem like a millstone around your neck if you’re trying to push for a responsive design. In practice though, it can be enormously useful. Mostly it’s a political issue: whereas ripping out the desktop site and starting from scratch is a huge task that would require everyone’s buy-in, nobody gives a shit about the mobile subdomain. Both the BBC news team and The Guardian are having great success with this approach, building mobile-first responsive sites bit-by-bit on the m. subdomain, with the plan to one day flip the switch and make the subdomain the main site. The downside is that until the switch is flipped, you’ve still got to deal with redirecting mobile traffic—probably using some nasty user-agent sniffing—and all the issues that come with having your content appearing at more than one URL.

There’s no doubt about it: trying to apply responsive design to large-scale existing desktop-centric sites is really, really hard. The message I keep repeating in my workshops is that you can’t expect to just sprinkle on some magic media-query fairydust—it just doesn’t work that way. Instead, you’ve got to figure out a way to reframe all your challenges into a mobile-first way of thinking.

Instead of asking “How can I make these patterns (mega-menus, lightboxes, complex data tables) work when the screen size shrinks?”, you need to ask “What’s the problem they’re supposed to be solving, and how would I design a solution for the small screen to start with?” Once you’ve done that, then it becomes a matter of scaling up to the large screen …which is actually a much simpler problem space.

As is so often the case with web design, it requires the application of progressive enhancement. In the case of responsive design, that means starting with small-screen styles, small-screen images, and small-screen content priority. Then you can progressively enhance with layout styles, larger images, and conditional loading of nice-to-have extra content. Oh, and you absolutely have to accept and embrace the fact that websites do not need to look the same in every browser.
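The conditional-loading part can be as simple as this sketch (the selector and URL are placeholders, and it assumes today’s fetch API): the core content is already in the server-rendered HTML, and the nice-to-have extras only get requested when there’s room for them.

// Core content is in the HTML already; extras are an enhancement
// that only larger screens bother to fetch.
if (window.matchMedia && window.matchMedia('(min-width: 40em)').matches) {
  var extras = document.querySelector('.related-content'); // placeholder hook
  if (extras && 'fetch' in window) {
    fetch('/related.fragment.html') // placeholder URL
      .then(function (response) { return response.text(); })
      .then(function (html) {
        extras.innerHTML = html;
      });
  }
}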

Making that change in thinking can be hugely challenging.

Remember when we were all making websites with tables for layout? Then the web standards movement came along, pushing for the separation of structure and presentation, urging us to use CSS for layout. It took the brain-rewiring power of the CSS Zen Garden to really give people that “A-ha!” moment.

Mobile-first responsive design requires a similar rewiring of the brain. And if you’re used to doing things a certain way, then it’s natural to resist such drastic change—although as Elliot pointed out at the Responsive Day Out, when you first make the switch it might be very tricky, but it gets easier and easier with each project.

Still, it can be a difficult message to hear. I suspect that’s why my workshop in Tel Aviv wasn’t so warmly received—I didn’t provide any easy answers.

The designers and developers at AutoScout24 also didn’t find it easy to accept how much they’d have to rethink their approach, but by the end of the three days they had a much clearer idea of how they could go about making that change. I’m really curious to see where they’ll go from here. Personally, I’m very optimistic about their prospects for successfully pulling off a large-scale responsive relaunch.

There are two main reasons for my optimism:

  1. They’ve already put together a front-end styleguide; a UI library of components. The fact that they’re already thinking about breaking things down into their component parts is a terrific approach (and they also said they’re planning to make their UI library public, which makes me very happy indeed).
  2. Developers, designers, and information architects work side by side. The web department works in teams, but those teams aren’t organised by job role. Instead each small team of 4 or 5 people has a product manager, a UX designer, visual designer, and a developer or two.

I can’t emphasise enough how important that kind of collaborative environment is.

I’ve said it before, and I’ll say it again; the biggest challenges of responsive design are not technology problems:

No, the biggest challenges, in my experience, are to do with people. Specifically, the way that people work together.

I’ve spoken to some companies who were eager to make the switch to responsive design, but who have designers and developers sitting in different rooms, or on different floors, or buildings, or even countries. That’s when my heart sinks. Trying to work in the iterative way that a good responsive project demands is going to be massively difficult—if not downright impossible—in that environment.

So I’m pretty confident that if the designers and developers at AutoScout24 put their minds to it, they can rise to the enormous challenge that lies ahead of them. They’ve got the right working environment, they’ve got a UI library, and they’ve got the option of using their existing mobile subdomain. Most of all, they’ve demonstrated a willingness to accept all the challenges that come with changing from a desktop-centric to a content-first mindset.

All in all, it was a very productive three days in Munich. It was hard work, but then again, I had the option of rewarding myself with some excellent Bavarian food and beer each evening.

Abendessen

Communication for America

Mandy has written a great article about making remote teams work. It’s an oft-neglected aspect of working on a product when you’ve got people distributed geographically.

But remote communication isn’t just something that’s important for startups and product companies—it’s equally important for agencies when it comes to client communication.

At Clearleft, we occasionally work with clients right here in Brighton, but that’s the exception. More often than not, the clients are based in London, or somewhere else in the UK. In the case of Code for America, they’re based in San Francisco—that’s eight or nine timezones away (depending on the time of year).

As it turned out, it wasn’t a problem at all. In fact, it worked out nicely. At the end of every day, we had a quick conference call, with two or three people at our end, and two or three people at their end. For us, it was the end of the day: 5:30pm. For them, the day was just starting: 9:30am.

We’d go through what we had been doing during that day, ask any questions that had cropped up over the course of the day, and let them know if there was anything we needed from them. If there was anything we needed from them, they had the whole day to put it together while we went home. The next morning (from our perspective), it would be waiting in our in/drop-boxes.

Meanwhile, from the perspective of Code for America, they were coming into the office every morning and starting the day with a look over our work, as though we had been beavering away throughout the night.

Now, it would be easy for me to extrapolate from this that this way of working is great and everyone should do it. But actually, the whole timezone difference was a red herring. The real reason why the communication worked so well throughout the project was because of the people involved.

Right from the start, it was clear that, because of time and budget constraints, we’d have to move fast. We wouldn’t have the luxury of debating everything in detail and getting every decision signed off. Instead we had a sort of “rough consensus and running code” approach that worked really well. It worked because everyone understood that was what was happening—if just one person was expecting a more formalised structure, I’m sure it wouldn’t have gone quite so smoothly.

So we provided materials in whatever level of fidelity made sense for the idea under discussion. Sometimes that was a quick sketch. Sometimes it was a fairly high-fidelity mockup. Sometimes it was a module of markup and CSS. Whatever it took.

Most of all, there was a great feeling of trust on both sides of the equation. It was clear right from the start that the people at Code for America were super-smart and weren’t going to make any outlandish or unreasonable requests of Clearleft. Instead they gave us just the right amount of guidance and constraints, while trusting us to make good decisions.

At one point, Jon was almost complaining about not getting pushback on his designs. A nice complaint to have.

Because of the daily transatlantic “stand up” via teleconference, there was a great feeling of inevitability to the project as it came together from idea to execution. Inevitability doesn’t sound like a very sexy attribute of a web project, but it’s far preferable to the kind of project that involves milestones of “big reveals”—the Mad Men approach to project management.

Oh, and we made sure that we kept those transatlantic calls nice and short. They never lasted longer than 10 or 15 minutes. We wanted to avoid the many pitfalls of conference calls.

Pattern sharing

Mike has written about the Code for America alpha website that we collaborated on:

We chose to work with ClearLeft because they develop a pattern portfolio (a pattern/style library) which would allow us to scale our work to our Brigades. This unique approach has aligned perfectly with our work style and decentralized organizational structure.

Thankfully, I think the approach of delivering a pattern portfolio (instead of just pages) isn’t so unique these days. Mind you, it still seems to be more common with in-house teams than agencies. The Mailchimp pattern library is a classic example.

But agencies like Paravel are—like Clearleft—delivering systems, not pages. Dave wrote about providing responsive deliverables:

Responsive deliverables should look a lot like fully-functioning Twitter Bootstrap-style systems custom tailored for your clients’ needs.

I think that’s a good way of looking at it: a Bootstrap for every project.

Here’s the front-end style guide for Code for America.

Usually these front-end deliverables will be password-protected on the Clearleft extranet for the client’s eyes only, but Code for America are all about openness, so they’re more than willing to let us share it with the world. That makes me very happy. I remember encouraging the guys at Starbucks to publish their front-end style guide and I’ve written about this spirit of sharing before:

These style guides and pattern libraries aren’t being published in an attempt to provide ready-made solutions—every project should have its own distinct pattern library. Instead, these pattern libraries are being published in a spirit of openness and sharing …a way of saying “Hey, this is what worked for us in these particular circumstances.”

If you’re poking around the Code for America style guide, you’ll notice that it borrows some ideas from the pattern primer idea I published a while back. But in this iteration, the markup is available via a toggle—a nice variation. There’s also a patchwork page that provides a nice glance-able uninterrupted view of the same patterns.

Every project is a learning experience and each front-end style guide gives us ideas about how to do the next one better. In fact, Mark is busy working on better internal tools for creating these kinds of deliverables—something we’ll definitely be sharing. In the meantime, I’ll be encouraging other clients to be as open as Code for America have been in allowing us to share these deliverables.

For more on the usefulness of front-end style guides, be sure to read Paul’s article on style guides for the web, Anna’s classic 24 Ways article, and of course, Anna’s pocket guide from Five Simple Steps.

Coding for America

Back when I was wandering around America in August, I mentioned that I met up with Mike Migurski in San Francisco:

I played truant from UX Week this morning to meet up with Mike for a coffee and a chat at Cafe Vega. We were turfed out when the bearded, baseball-capped, Draplinesque barista announced he had to shut the doors because he needed to “run out for some milk.” So we went around the corner to the Code For America office.

It wasn’t just a social visit. Mike wanted to chat about the possibility of working with Clearleft. The Code for America site was being overhauled. The new site needed to communicate directly with volunteers, rather than simply being a description of what Code for America does. But the site also needed to be able to change and adapt as the organisation’s activities expanded. So what they needed was not a set of page designs; they needed a system of modular components that could be assembled in a variety of ways.

This was music to my ears. This sort of systems-thinking is exactly the kind of work that Clearleft likes to get its teeth into. I showed Mike some of the previous work we had done in creating pattern libraries, and it became pretty clear that this was just what they were looking for.

When I got back to Brighton, Clearleft assembled a small squad to work on the project. Jon would handle the visual design, with the branding work of Dojo4 as a guide. For the front-end coding, we brought in some outside help. Seeing as the main deliverable for this project was going to be a front-end style guide, who better to put that together than the person who literally wrote the book on front-end style guides: Anna.

I’ll go into more detail about the technical side of things on the Clearleft blog (and we’ll publish the pattern library), but for now, let me just say that the project was a lot of fun, mostly because the people we were working with at Code for America—Mike, Dana, and Cyd—were so ridiculously nice and easy-going.

Anna and Jon would start the day by playing the unofficial project theme song and then get down to working side-by-side. By the end of the day here in Brighton, everyone was just getting started in San Francisco. So the daily “stand up” conference call took place at 5:30pm our time; 9:30am their time. The meetings rarely lasted longer than 10 or 15 minutes, but the constant communication throughout the project was invaluable. And the time difference actually worked out quite nicely: we’d tell them what we had been working on during our day, and if we needed anything from them; then they could put that together during their day so it was magically waiting for us by the next morning.

It’ll be a while yet before the new site rolls out, but in the meantime they’ve put together an alpha site—with a suitably “under construction” vibe—so that anyone can help out with the code and content by making contributions to the github repo.

A Gov Supreme

I’ve been doing some workshopping and consultancy at a few different companies recently, mostly about responsive design. I can’t help but feel a little bad about it because, while I think they’re expecting to get a day of CSS, HTML, and JavaScript, what they actually get is the uncomfortable truth that responsive design changes everything …changes that start long before the front-end development phase.

I explain the ramifications of responsive design, hammer on about progressive enhancement like a broken record, extoll the virtues of a content-first approach, exhort them to read A Dao of Web Design, and let them know that, oh, by the way, your entire way of working will probably have to change.

Y’see, it’s my experience that the biggest challenges of responsive design (which, let’s face it, now means web design) are not technology problems. Sure, we’ve got some wicked problems when dealing with non-flexible media like bitmap images, which fight against the flexible nature of the web, but thanks to the work of some very smart and talented people, even those kinds of issues are manageable.

No, the biggest challenges, in my experience, are to do with people. Specifically, the way that people work together.

Old waterfallesque processes where visual designers work entirely in Photoshop before throwing PSDs over the wall to developers just don’t cut it any more. Old QA testing processes that demanded visual consistency across all browsers and platforms are just ludicrous.

The thing is …those old processes were never any good. We fooled ourselves into thinking they worked, but that was only because we were working from some unfounded assumption: that everyone is on broadband, that everyone has a nice big screen, that everyone has a certain level of JavaScript capability. The explosion in diversity of mobile devices (and with it, the rise of responsive design) has shone a light on those assumptions and exposed those old processes for the façades that they always were.

When I’m doing a workshop and I tell that to designers, developers, and project managers, they often respond by going through the five stages of grief. Denial, anger, bargaining, depression …I try to work with them through those reactions until they ultimately get to acceptance.

Somewhere between the “bargaining” and “depression” phase, somebody inevitably passes the buck further up the chain:

“Oh, we’d love to do what you’re saying, but our clients would never go for it.” Or “You’ve convinced me but there’s no way our boss will ever agree to this.”

I’ve got to be honest: sometimes I think we use “the client” and “the boss” as a crutch. I’m also somewhat bemused when people ask me for advice to help them convince their client or their boss. I don’t know your boss—how could I possibly offer any relevant advice?

Still, I’ve written about this question of “How do I convince…?” before:

Something I’ve found useful in the past is the ability to point at trailblazers and say “like that!” Selling the idea of web standards became a whole lot easier after Doug redesigned Wired and Mike redesigned ESPN. It’s a similar situation with responsive design: clients are a lot more receptive to the idea now that The Boston Globe site is live.

When it comes to responsive design, there’s one site that should thoroughly shame anyone who claims that they can’t convince their boss to do the right thing: GOV.UK.

It’s responsive. It puts user needs first. It’s beautiful. It even won the Design Museum’s design of the year, for crying out loud.

This isn’t some flashy lifestyle business. This isn’t some plucky young disruptive startup. This is the British government, an organisation so stodgy and bureaucratic that there are multiple sitcoms about its stodginess and bureaucracy.

Gov.uk is an inspiration. If the slowest-moving organisation in the country can turn itself around, embrace a whole new way of working, and produce a beautiful, usable, responsive site, then the rest of us really have no excuse.

Building Matter

When I was preparing my Responsive Enhancement workshop for last year’s dConstruct, I thought I should create an example site to demonstrate the various techniques I would be talking about: how responsive design could be combined with progressive enhancement to make something that works great on any device.

Round about that time, while I was scratching my head trying to figure out what the fake example site should be, I got an email from Bobbie asking if I wanted to meet up for a coffee and a chat. We met up and he told me about a project he wanted to do with his colleague Jim Giles. They wanted to create a place for really good long-form journalism on science and technology.

“The thing is,” said Bobbie, “we want to make sure it’s readable on phones, on tablets, on Kindles, everything really. But we don’t know the best approach to take for that.”

“Well, Bobbie, it’s funny you should mention that,” I said. “I’m currently putting together a workshop all about responsive design, which sounds perfect for what you want to do. And I need to create an example site to showcase the ideas.”

It was a perfect match. Bobbie gave me his design principles, personas, and—most importantly—content. In return, he would get a prototype that would demonstrate how that content could be readable on any device; perfect for drumming up interest and investment.

The workshop went really well, and some great ideas came out of the brainstorming the attendees were doing.

A few months later, Bobbie and Jim put the project—now called Matter—up on Kickstarter. They met their target, and then some. Clearly there was a lot of interest in well-written original journalism on the web. Now they had to build it.

They got hold of Phil to do the backend, so that was sorted, but Bobbie asked me if I knew any kick-ass designers and front-end developers.

“Well, I would love to work on it,” I said. “So how about working with Clearleft?”

“I didn’t think you guys would be available,” he said. “I’d love to work with you!”

And so we began a very fun collaboration. Paul moved his desk next to mine and we started playing around with the visual design and front-end development. Phil and Bobbie came by and we hammered out design principles, user journeys, and all that fun stuff.

Finishing up a great day of project planning with Bobbie and Phil

It was really nice to work on a project where readability took centre stage. “Privilege the reading experience” was our motto.

Paul did some fantastic work, not just on creating a typographic system, but also creating a brand identity including what I think is a really great logo.

Wearing my @ReadMatter T-shirt

I started putting together a system of markup and CSS patterns, using the device lab to test them. Phil started implementing those patterns using Django. It all went very smoothly indeed.

Testing Placekittening

Today is launch day. Matter is live. If you backed the project on Kickstarter, you’ve got mail. If not, you can buy the first issue for a mere 99 cents.

The first piece is a doozy. It’s called Do No Harm:

Why do some people want to amputate a perfectly healthy limb? And why would any doctor help them?

If this is indicative of the kind of work that Matter will be publishing, it will definitely live up to its ambitious promise:

MATTER commissions, crafts and publishes unmissable journalism about science, technology and the ideas shaping our future.

How do I convince…?

When I was speaking at An Event Apart in Austin I gave a somewhat rambling presentation. As usual, I was hammering home the importance of progressive enhancement, a methodology that’s actually not that tricky once you accept that websites do not need to look exactly the same in every browser and neither do websites need to be experienced exactly the same in every browser.

I had some time after the talk to answer a question or two from the audience—something I always enjoy. One of the questions went something along the lines of “All of this sounds great, but how do I convince my boss…?”

I smiled.

I smiled because I had been having exactly this same conversation with Beth at the opening party the night before. Here’s what I told her (and what I repeated in answering the person who asked the question)…

I’ve been giving talks since around 2005. At first I talked about DOM Scripting, trying to convince people that JavaScript wasn’t evil (at a time when JavaScript was very unpopular). Later I spoke about Ajax and progressive enhancement, hoping to persuade people to use the technology in a responsible way. Sometimes I gave talks about microformats. Later I got excited by HTML5 and spoke about that. More recently I’ve been talking about the importance of taking a Content First approach to responsive design.

Almost every time I gave a talk—no matter what the subject matter—someone would inevitably say “Yes, but how do I convince my boss?” or “That’s all well and good but how do I convince my clients?”

In fact, one time when I was giving a talk at From The Front in Italy, I made an extra slide that I kept in reserve after the final “thank you” slide. It simply read “How do I convince…?” Sure enough, when I was taking questions from the audience, someone asked that very question (and I advanced my slide deck and looked like a mind-reader).

The reason I mention this recurring trend is that I find it reassuring. We’ve been here before. What each one of my previous experiences has shown me is that things do change. Change might seem slow at times, but there’s a big difference between slow and static.

I remember the days of the web standards campaign. Trying to convince developers to use CSS for presentation instead of tables seemed like a Sisyphean task. But we got there.

I felt like a lone voice in the wilderness crying out in favour of liquid layouts for years, but now—thanks to the rise of responsive design—change has finally come. As for responsive design itself, I was sure it was going to be another uphill struggle to convince people of the benefits—and I was all set to take a hardline approach—but I’ve been pleasantly surprised to see that it’s an idea whose time has come.

I’m not the only one who has noticed this cyclical trend of new technologies and methodologies being pessimistically dismissed. Eric recently said:

So take heart. All of this has happened before. All of this will happen again.

But what about answering the question? How do you convince clients/bosses to adopt a new technique or technology?

I’m afraid that this is the point at which I tend to throw up my hands and say, “Don’t ask me! That’s not my job—I just make websites.” But it’s a perfectly valid question and I think it would be good to have resources we could all point to when we need some ammunition.

Luke is exceptionally good at providing data to back up his arguments. I wrote on the back of his book:

Luke doesn’t just rely on his wondrous wit and marvellous writing style to make an overwhelmingly convincing argument for designing the mobile experience first; he also hammers home all of his points with oodles and oodles of scrumptious data.

That’s a good tactic. As he once said to me, “If you torture data for long enough, you can get it to confess to anything.”

Something I’ve found useful in the past is the ability to point at trailblazers and say “like that!” Selling the idea of web standards became a whole lot easier after Doug redesigned Wired and Mike redesigned ESPN. It’s a similar situation with responsive design: clients are a lot more receptive to the idea now that The Boston Globe site is live. But of course if you only ever follow the trailblazers, you’ll never get the opportunity to blaze a new trail yourself. Frustrating.

Another tactic that I’ve used in the past is to simply not ask for permission, but go ahead and use the new technologies and techniques anyway. That isn’t always practical but it’s worth a try. Rather than spending valuable time trying to convince your boss or client that they should let you do something, just do it (if you’ll pardon the Nike-ian platitude).

Andy likens the “How do I convince…?” conundrum to having a plumber come ‘round to fix your sink, only to ask you “Is it alright if I use this particular wrench?” You’re the plumber—you decide!

Except we’re not in the plumbing business (and we’re clearly not in the metaphor business either).

I do sometimes wonder whether we use the big bad client or the big bad boss as a crutch. “Oh, I’d love to try out this technique, but the client/boss would never go for it. Something something IE6.” Maybe we’re not giving them enough credit. Given the right argument, they might just listen to reason.

Secret src

There’s been quite a brouhaha over the past couple of days around the subject of standardising responsive images. There are two different matters here: the process and the technical details. I’d like to address both of them.

Ill communication

First of all, there’s a number of very smart developers who feel that they’ve been sidelined by the WHATWG. Tim has put together a timeline of what happened:

  1. Developers got involved in trying to standardize a solution to a common and important problem.
  2. The WHATWG told them to move the discussion to a community group.
  3. The discussion was moved (back in February), a general consensus (not unanimous, but a majority) was reached about the picture element.
  4. Another (partial) solution was proposed directly on the WHATWG list by an Apple employee.
  5. A discussion ensued regarding the two methods, where they overlapped, and what the general opinions of each were. The majority of developers favored the picture element and the majority of implementors favored the srcset attribute.
  6. While the discussion was still taking place, and only 5 days after it was originally proposed, the srcset attribute (but not the picture element) was added to the draft.

A few points in that timeline have since been clarified. That second step—“The WHATWG told them to move the discussion to a community group”—turns out to be untrue. Some random person on the WHATWG mailing list (which is open to everyone) suggested forming a Community Group at the W3C. Alas, nobody else on the WHATWG mailing list corrected that suggestion.

Then there’s the apparent causality between steps 4 and 6. Initially, I also assumed that this was what happened: that Ted had proposed the srcset solution without even being aware of the picture solution that the Community Group had independently come up with. It turns out that’s not the case. Ted had another email about the picture proposal but he never ended up sending it. In fact, his email about srcset had been sitting in draft for quite a while and he only sent it out when he saw that Hixie was finally collating feedback on responsive images.

So from the outside it looked like there was preferential treatment being given to Ted’s proposal because it came from within the WHATWG. That’s not the case, but it must be said: the fact that srcset was so quickly added to the spec (albeit in a different form) doesn’t look good. It’s easy to understand why the smart folks in the Responsive Images Community Group felt miffed.

But let’s be clear: this is exactly how the WHATWG is supposed to work. Use-cases are evaluated and whatever Hixie thinks is the best solution gets put in the spec, regardless of how popular or unpopular it is.

Now, if that sounds abhorrent to you, I completely understand. A dictatorship should cause us to recoil.

That’s where the W3C come in. Their model is completely different. Everything is done by committee there.

Steve Faulkner chimed in on Tim’s post with his take on the two groups:

It seems like the development of HTML has turned full circle, the WHATWG was formed to overthrow the hegemony of the W3C, now the W3C acts as a counter to the hegemony of the WHATWG.

I think he’s right. The W3C keeps the rapid, sometimes anarchic approach of the WHATWG in check. But the opposite is also true. Without the impetus provided by the WHATWG, I’m not sure that the W3C HTML Working Group would ever get anything done. There’s a balance that actually works quite well in practice.

Back to the situation with responsive images…

Unfortunately, it appears to people within the Responsive Images Community Group that all their effort was wasted because their proposed solution was summarily rejected. In actuality, all the use-cases they gathered were immensely valuable. But it’s certainly true that the WHATWG didn’t make it clear how and where developers could best contribute.

Community Groups are a W3C creation. They don’t have anything to do with the WHATWG, who do all their work on their own mailing list, their own wiki and their own IRC channel.

I do think that the W3C Community Groups offer a good place to go bike-shedding on problems. That’s a term that’s usually used derisively, but sometimes it’s good to have a good ol’ bike-shedding session without clogging up the mailing list for everyone. But it needs to be clear that there’s a big difference between a Community Group and a Working Group.

I wish the WHATWG had done a better job of communicating to newcomers how best to contribute. It would have avoided a lot of the frustrations articulated by Wilto:

Unfortunately, we were laboring under the impression that Community Groups shared a deeper inherent connection with the standards bodies than it actually does.

But in any case, as Doctor Bruce writes, at least now there’s a proposed solution for responsive images in HTML: The Living Standard:

I don’t really care which syntax makes the spec, as long as it addresses the majority of use cases and it is usable by authors. I’m just glad we’re discussing the adaptive image problem at all.

So let’s take a look at the technical details.

src code

The Responsive Images Community Group came up with a proposal based on the idea of minting a new element (called, say, picture) that mimics the behaviour of video:

<picture alt="image description">
  <source src="/path/to/image.png" media="(min-width: 600px)">
  <source src="/path/to/otherimage.png" media="(min-width: 800px)">
  <img src="/path/to/image.png" alt="image description">
</picture>

One of the reasons why a new element was chosen rather than extending the existing img element was due to a misunderstanding. The WHATWG had explained that the parsing of img couldn’t be easily altered. That means that img must remain a self-closing element—any solution that requires a closing /img tag wouldn’t work. Alas, that was taken to mean that extending the img element in any way was off the cards.

The picture proposal has a number of things going for it. Its syntax is easily understandable for authors: if you know media queries, then you know how to use picture. It also has a good fallback for older browsers: a regular img element. This fallback mechanism (and the idea of multiple source elements with media queries) is exactly how the video element is specced.
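
For comparison, here’s a rough sketch of that pattern as the video element specs it; the file names and the breakpoint are just placeholders:

<video controls>
  <source src="/path/to/movie.webm" type="video/webm" media="(min-width: 600px)">
  <source src="/path/to/movie.mp4" type="video/mp4">
  <!-- fallback content for browsers without video support -->
  <p>Download the <a href="/path/to/movie.mp4">video</a>.</p>
</video>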

Unfortunately using media queries on the sources of videos has proven to be very tricky for implementors, so they don’t want to see that pattern repeated.

Another issue with multiple source elements is that parsers must wait until the closing /picture tag before they can even begin to evaluate which image to show. That’s not good for performance.

So the alternate solution, based on Ted’s proposal, extends the img element using a new srcset attribute that takes a comma-separated list of values:

<img alt="image description"
     src="/path/to/fallbackimage.png"
     srcset="/path/to/image.png 800w, /path/to/otherimage.png 600w">

Not nearly as pretty, I think you’ll agree. But it is actually nice and compact for the “retina display” use-case:

<img alt="image description" src="/path/to/image.png" srcset="/path/to/otherimage.png 2x">

Just to be clear, that does not mean that otherimage.png is twice the size of image.png (though it could be). What you’re actually declaring is “Use image.png unless the device supports double-pixel density, in which case, use otherimage.png.”

Likewise, when I declare:

srcset="/path/to/image.png 600w 400h"

…it does not mean that image.png is 600 pixels wide by 400 pixels tall. Instead, it means that an action should be taken if the viewport matches those dimensions.

It took me a while to wrap my head around that distinction: I’m used to attributes describing the element they’re attached to, not the viewport.

Now for the really tricky bit: what do those numbers—600w and 400h—mean? Currently the spec is giving conflicting information.

Each image that’s listed in the srcset comma-separated list can have up to three values associated with it: w, h, and x. The x is pretty clear: that’s the pixel density of the device. The w and h values refer to the width and height of the viewport …but it’s not clear if they mean min-width/height or max-width/height.
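
For reference, a single image candidate can carry all three descriptors at once; here’s a made-up example following the draft syntax (the paths and values are placeholders):

<img alt="image description"
     src="/path/to/fallbackimage.png"
     srcset="/path/to/otherimage.png 600w 400h 2x">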

If I’m taking a “Mobile First” approach to development, then srcset will meet my needs if w and h refer to min-width and min-height.

In this example, I’ll just use w to keep things simple:

<img src="small.png" srcset="medium.png 600w, large.png 800w">

(Expected behaviour: use small.png unless the viewport is wider than 600 pixels, in which case use medium.png unless the viewport is wider than 800 pixels, in which case use large.png).

If, on the other hand, w and h refer to max-width and max-height, I have to take a “Desktop First” approach:

<img src="large.png" srcset="medium.png 800w, small.png 600w">

(Expected behaviour: use large.png unless the viewport is narrower than 800 pixels, in which case use medium.png unless the viewport is narrower than 600 pixels, in which case use small.png).

One of the advantages of media queries is that, because they support both min- and max- width, they can be used in either use-case: “Mobile First” or “Desktop First”.

Because the srcset syntax will support either min- or max- width (but not both), it will therefore favour one case at the expense of the other.

Both use-cases are valid. Personally, I happen to use the “Mobile First” approach, but that doesn’t mean that other developers shouldn’t be able to take a “Desktop First” approach if they want. By the same logic, I don’t much like the idea of srcset forcing me to take a “Desktop First” approach.

My only alternative, if I want to take a “Mobile First” approach, is to duplicate image paths and declare ludicrous breakpoints:

<img src="small.png" srcset="small.png 600w, medium.png 800w, large.png 99999w">

I hope that this part of the spec offers a way out:

for the purposes of this requirement, omitted width descriptors and height descriptors are considered to have the value “Infinity”

I think that means I should be able to write this:

<img src="small.png" srcset="small.png 600w, medium.png 800w, large.png">

It’s all quite confusing and srcset doesn’t have anything approaching the extensibility of media queries, but I hope we can get it to work somehow.

Responsive questions

I got an email from Ben Frain recently asking if I’d answer some questions for an upcoming article in MacUser UK about responsive design. Seeing as this is a topic I could natter on about endlessly, I happily obliged.

Here are my answers to his questions. There’s a good chance that much of this will get trimmed or altered for the final article so I figured I’d share my verbatim responses here.

When you first looked at responsive web design methodology, can you remember your initial reaction?

Before Ethan wrote his seminal article in A List Apart, I saw him giving a presentation at An Event Apart in which he outlined the ideas of responsive design. My reaction was “Yes! Yes! Yes!”

Ethan was essentially describing all-round best practices for the web in general, taking progressive enhancement to the next level. But the reason why people started paying attention was because of the timing; the idea of websites being accessed by browsers with all sorts of screen dimensions was no longer an abstract concept, it was a very real description of web browsing demographics.

So my overall reaction to responsive web design was “Finally! Maybe now web designers and developers will really start embracing the web as its own medium.” It’s no surprise that Ethan’s article in A List Apart referenced A Dao Of Web Design by John Allsopp—a piece of writing that should serve as a manifesto for everyone working on the web.

Have you been surprised that responsive web design has become the zeitgeist of the front end community for the past 18 months or so?

I’m not surprised that responsive web design has struck a chord. I only wish it could have happened sooner. While media queries are a relatively recent innovation, we’ve always had the ability to create fluid layouts. And yet web designers and developers have wilfully ignored that fact, choosing instead to create un-webby fixed-width layouts.

In taking a batch of related technologies—liquid layouts, media queries, and fluid images—and then grouping them together under one banner—responsive web design—Ethan made it a lot easier for people to talk about this approach to designing and building web sites. There’s a real power in naming related technologies like this. We saw the same explosion of discussion and creativity when Jesse James Garrett coined the term Ajax.

How long after understanding it did you create your first working example (either client work or ‘playground’ work)?

I was already making liquid layouts. In fact, every single site I’ve ever built for over a decade has used percentages by default. Because of that, I was already familiar with the challenges of fluid images and the work done by Richard Rutter (which Ethan references). I had started to dabble with media queries on my own personal projects but seeing Ethan’s proof-of-concept was just the incentive I needed to start implementing them on client sites.

As the methodology gained traction, it started to get a lot of flak from some quarters, often mobile developers. What do you put that down to?

I think a lot of people misunderstood what problems responsive design was claiming to solve. It was never specifically about mobile devices or users in a mobile context; it was always about adapting layout to varying viewport sizes.

A lot of people seemed to be angry that responsive web design didn’t appear to solve any issues relating to bandwidth or context. It’s true that responsive web design doesn’t solve those problems …it also doesn’t cure cancer. It never claimed to.

Responsive design isn’t about mobile. Neither is it about the desktop. It’s about the web.

Whilst no one set of principles can be considered a panacea or magic bullet, are there specific instances where you’d argue against a responsive web design for a client’s site?

Honestly, no. But the reason I say that is that, once you’re used to creating responsive sites, it’s really no extra effort. So I’m not saying that every project needs to go that extra mile—quite the opposite. I’m saying that sites that adapt to the user’s device should be the default (and should have always been the default).

The only time I would argue that a client shouldn’t have a responsive site is if the client shouldn’t have a web site at all.

Just to clarify: I’m not saying that the client couldn’t also have subdomains or apps targeted at specific classes of device as well as their responsive web site. But the baseline to having any presence on the web should be a website that works for everyone, everywhere.

That said, it’s a lot easier to create a responsive site from scratch than to attempt to retro-fit an existing desktop-centric site. In that situation, where the desktop-centric site is just too big and bloated to serve up to mobile devices, a separate mobile-specific site can be a good stop-gap measure. But in the long run, maintaining multiple silos just doesn’t scale. Also, the fact that the site is too big and bloated for mobile probably means it’s too big and bloated for anyone, regardless of their device.

For a client who has neither the business necessity nor the budget for a ‘mobile specific’ website (let me qualify that by saying that I term a ‘mobile specific’ website as one that has some server side functionality to ‘sniff’ the device and serve up an entirely different experience based upon it), is there any better option for clients to get themselves a mobile ready presence?

Well, yeah: a responsive web site! It might not be specifically targeted at mobile devices but, if it’s done right, it won’t be specifically targeted at any particular class of device.

At present, although server (e.g. adaptive-images) and JS (Scott Jehl’s <picture>) based solutions exist, responsive design struggles when it comes to responsive images as there is no way to provide alternate images based on media capability or connection speed (one day please!) through markup alone. What would you like to see happen to combat this issue?

There’s some great work being done by the W3C Responsive Images community group. I’m hoping to see some rapid adoption by browsers. But mostly, what I’d like to see is exactly what’s going on: a bunch of really smart people getting together to collectively solve this problem in a backwards-compatible way. I find it quite inspiring, actually.

What are some obvious pitfalls people should avoid when implementing a responsive design?

The biggest mistake I’ve seen is when developers try to treat responsiveness as an add-on, something to be bolted on at the end of the development process. That’s going to lead to a world of pain.

Responsive design makes most sense when it’s paired with the idea of Mobile First. Thinking about the screen size and capabilities of mobile devices first forces you to focus and really think about what’s absolutely essential to deliver. When you don’t have the luxury of a large viewport or a fast connection, you’ll quickly find that complicated navigation and unnecessary page cruft get trimmed.

In fact, that approach isn’t really about mobile specifically, it’s about focusing on the content. Content First.

Personally, I’d like to see some ability to visually re-construct the DOM through CSS alone - so media queries could literally place anything anywhere. Do you feel that specs like CSS Regions hold the answers to that problem?

I’m much more excited about flexbox, but that might just be because I haven’t examined CSS Regions in any depth.

Flexbox is going to be a game-changer, I think. Source order will still matter for older browsers, but we’ll be able to serve up just about any layout regardless of source order. It’ll be great to finally have that real separation of concerns.
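
To illustrate what I mean, here’s a rough sketch using the standardised flexbox syntax (rather than the prefixed syntax browsers currently support); the class names and content are invented:

<style>
  .page    { display: flex; }
  .content { order: 2; } /* first in the source… */
  .sidebar { order: 1; } /* …but displayed before the content */
</style>
<div class="page">
  <div class="content">Main content</div>
  <div class="sidebar">Sidebar</div>
</div>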

Whether it’s flexbox or regions, I look forward to the day when we can stop using layout hacks like floats, because let’s face it, floats are a hack: they were never intended for layout.

Although tools like Adobe Shadow (Weinre) are emerging, existing prototyping tools like Fireworks are limited when it comes to fluid designs - do you prototype/design there or do you do a lot of designing in browser?

Fireworks and Photoshop are useful tools for designing individual elements of a site, but they are woefully inadequate at conveying the fluid, dynamic nature of the browser. For that reason, I think it makes a lot of sense to get into the browser as soon as possible (it also means you can start testing your designs sooner).

Spending a lot of time making high-fidelity comps isn’t very efficient, I feel. A lot of that time would be better spent trying things out in the browser and reacting to how they behave at different sizes.

Some people have claimed that designing in the browser is much more limiting than designing in Fireworks or Photoshop, but I think that just comes down to what you’re used to. Those tools come with their own constraints (a fixed-width canvas and lack of interaction being the obvious ones).

Also, if there are certain things that can only be done in a tool like Fireworks and not in a web browser, then what’s the point of doing them? Unless you’re planning to just export your design as one big image, you’re going to have to translate that Fireworks comp into markup and CSS at some point. There’s no point in creating something that can’t be translated.

Graphic design tools still have their place. One of the techniques I find works really well with responsive design is the creation of Style Tiles. These allow you to nail down the visual vocabulary of a project without getting into the nitty-gritty of page layout. They are less wishy-washy than mood boards but not as time-consuming and high-fidelity as page comps.

Can you sum up, in general terms, the key things you think people should consider when building sites today?

I’ve found that it makes sense to apply the principle of progressive enhancement to everything: layout, images, and content:

  • Use small images by default.
  • Don’t apply any layout in your CSS.
  • Start with the content that is absolutely essential.

Once you’ve got that baseline working well, then you can start to progressively enhance the site:

  • Load in larger images if the screen size permits it.
  • Use a grid for page layout, but keep the CSS declarations for the grid within media queries (see the sketch after this list).
  • Use Ajax to conditionally load non-essential content for larger screens.
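
Here’s a minimal sketch of that grid point; the breakpoint, class names and widths are arbitrary:

<style>
  /* No layout rules outside the media query: small screens get the default flow. */
  @media (min-width: 30em) {
    .main    { float: left;  width: 70%; }
    .sidebar { float: right; width: 30%; }
  }
</style>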

Don’t start a design by thinking about the desktop layout. But don’t start by thinking about the mobile layout either. Instead, think about the content. And when I say “content”, I don’t mean “copy.” Your content could be a task, like adding an item to a shopping cart. Focus on the core task that your user wants to accomplish.

Separating out the content (reading an article, buying a pair of shoes) from the delivery mechanism (a desktop browser, a mobile browser, a tablet) requires a different mindset to the way web sites have traditionally been built. But much like the change in mindset that was required when we changed from tables for layout to CSS, it’s incredibly rewarding.

Citation needed

Over on the HTML5 Doctor site, Oli has written a great article called Quoting and citing with <blockquote>, <q>, <cite>, and the cite attribute.

Now, I still stand by my criticism of the way the cite element has been restrictively redefined in HTML5 such that it’s not supposed to be used for marking up a resource if that resource is a person. But I think that Oli has done a great job in setting out the counter-argument:

By better defining <cite>, we increase the odds of getting usable data from it, though we now need different methods to cover these other uses.

Oli’s article also delves into the blockquote element, which is defined in HTML5 as a sectioning root.

Don’t be fooled by the name: sectioning roots are very different to sectioning content in a fundamental way. Whereas sectioning content elements—section, article, nav and aside—are all about creating an explicit outline for the document from the headings contained within the sectioning content (using the new outline algorithm), the headings within sectioning roots (blockquote, td, fieldset, figure, etc.) don’t contribute to the document outline at all! But what sectioning roots and sectioning content have in common is that they both define the scope of the header and footer elements contained within them.
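
A quick illustrative sketch (the content is invented): the heading inside the section contributes to the document outline, but the heading inside the blockquote, a sectioning root, does not:

<section>
  <h1>This heading appears in the document outline</h1>
  <blockquote>
    <h1>This quoted heading does not</h1>
    <p>…</p>
  </blockquote>
</section>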

The footer element is defined as containing information about its section such as who wrote it, links to related documents, copyright data, and the like.

This gives rise to a rather lovely markup pattern that’s used on HTML5 Doctor: why not use the footer element within a blockquote to explicitly declare its provenance:

<blockquote>
  <p>The people that designed .mobi were smoking crack.</p>
  <footer>&mdash;<cite class="vcard">
    <a class="fn url" href="http://tantek.com/">Tantek Çelik</a>
  </cite></footer>
</blockquote>

(and yes, I am using the cite element to mark up a person’s name there).

Well, apparently that blockquote pattern is not allowed according to the spec:

Content inside a blockquote must be quoted from another source.

Because the content within the blockquote’s footer isn’t part of the quoted content, it shouldn’t be contained within the blockquote.

I think that’s a shame. So does Oli. He filed a bug. The bug was rejected with this comment:

If you want the spec to be changed, please provide rationale and reopen.

That’s exactly what Oli is doing. He has created a comprehensive document of block quote metadata from other resources: books, plays, style guides and so on.

Excellent work! That’s how you go about working towards a change in the spec—not with rhetoric, but with data.

That’s why my article complaining about the restrictions on the cite element is fairly pointless, but the wiki page that Tantek set up to document existing use cases is far more useful.

Veerle Pieters: The Experimental Zone

The next speaker at An Event Apart in Boston is Veerle Pieters. I’m going to try liveblogging some of what she’s got to say.

Veerle’s talk is called The Experimental Zone and it’s all about experimentation in web design. People often ask her how she comes up with, say, certain colour combinations but she doesn’t really have a straightforward answer—a lot of it is down to experimentation. So it’s good to learn how to experiment better. Pablo Picasso said:

Inspiration exists, but it has to find you working.

Spirographs seem complex but they are a perfect example of how experimenting with really simple fundamental rules and shapes can lead to a beautiful result: start with a simple, translucent square and apply the same transform multiple times, e.g. scaling to 85% and rotating by -10 degrees (in Illustrator: Object > Transform > Transform Again).

You can also experiment with colours in spirographs. Start with a translucent triangular shape, copy and rotate it by 18 degrees but before that, change the colour values. Try different blending modes and see what comes out. Combine different layer modes with different scaling values e.g. 115%. Try different rotation angles to see how they turn out. An extreme value like 48 degrees applied to translucent circular shapes of different colours leads to some interesting results.

Transparency. Blending. Scale. Rotation. Colour. Experiment with those combinations.

But why play around with this stuff? Well, Veerle used some of the results in client work for some background images on sites and on physical credit card designs.

Start with some circles in the colours defined by the client’s in-house style guide and start experimenting with combinations. It’s okay to try out a dozen versions. When you really need to have control, you can get in there and change the overlapping colour combinations manually.

Veerle also does small experiments not related to work; a little every day. She’s got a folder full of patterns and experiments that she hasn’t used yet but they might come in handy later on. Another example of experimentation was the Duoh Christmas card. She began with a star and started experimenting with repeating patterns. Those experiments didn’t lead anywhere so she went back to the star and tried a different approach. That’s often the way things work out: you have a starting point, you experiment from there and if it doesn’t work out, return to the starting point and try a different direction. For the Christmas card, scaling the star to different sizes with different colours and opacity led to the final result.

Logo design works in a similar way. The typeface is the starting point (in this example, Dessau Pro Regular). Veerle tweaked the letter shapes and started experimenting with shapes within the shapes. In this example, Veerle took the bowl of the letter A and started duplicating and rotating, getting some really nice results. You’re playing around and then suddenly you go “Oh, that’s it: that’s what I was looking for.”

Veerle sketches her ideas down. For her own blog, she started sketching variations based around the letter V but she didn’t like any of the results so she left the sketchbook and jumped into Illustrator. Sometimes it’s a bit of both that works: experiments in a sketchbook and in Illustrator.

If time permits, Veerle likes to leave a design (like a logo) alone for a while. Then come back to it and see if you still like it. For her blog, the initial logo she created didn’t stand up to this test. So she went back to the starting point, the letter V, and went in a different direction, keeping the elements she liked from the previous attempt—like the colours—but experimenting with shapes.

Mood boards can be useful for getting started. For the book cover of Aaron’s forthcoming book Adaptive Web Design, she began with her scrapbook-like collection of images and started putting some together into a mood board, trying to visualise the concept of progressive enhancement. The first design direction was ruled to be a bit too abstract. So the simple cubes were ditched in favour of something more sophisticated. The end result is the chameleon on the cover—it’s built of abstract shapes and many colours, but the result is something recognisable.

“Let’s experiment,” says Veerle.

As Erik Spiekermann has said, you can be inspired by something but you can’t just copy it wholesale. But Veerle likes to begin by reproducing something side-by-side and then, maybe a few days later, try to reproduce it without the original. The result is stamped with your own take on the original. She did this with the book cover for Imaginary Cities.

She started sketching it from memory. Her version turned out different; more cube-based. She imported that sketch into Illustrator and started making outlines with the pen tool. Once the tracing was done, she started filling in shapes with translucent colours. She used the colour picker to take colours from some of the overlapping shapes for use in a different layer with a different mode: the resulting colour fill is very different. She didn’t know what the end result would be but she just tried things out. Once the colours had been gathered together, she created some gradients with them and applied them to some of the cubes. Then she added some dashed lines that she recalled from the original cover. Finally, she upped the contrast.

But let’s go a step further. Let’s try to do this with CSS.

Alex Walker’s article The Cicada Principle is all about introducing pseudo-randomness into tiled multiple background images: the image sizes are all based on different prime numbers. The result looks random. For the curtain example, a ruffle is the base unit: the first image has 1 unit, the second image has 3 units, the third has 7 units.

Veerle takes this idea and applies it to her cube-based design. She went with multiples of 3: 300 pixels, 600 pixels, 900 pixels. The result is a great backdrop of overlapping cubes and no matter how wide your browser window, you won’t see the repeat. You can see it in action at http://www.duoh.com/varia/cicada.
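
The underlying CSS technique looks something like this sketch (the file names are placeholders; Veerle’s tiles were 300, 600 and 900 pixels wide):

<style>
  .backdrop {
    /* Three translucent tiles of different widths layered together;
       the combined pattern only lines up again at the least common
       multiple of the tile widths. */
    background-image: url(cubes-a.png), url(cubes-b.png), url(cubes-c.png);
    background-repeat: repeat-x;
  }
</style>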

Veerle has some practical tips to finish with.

  • Name your layers. Turn off that preference in Photoshop that says “Add ‘copy’ to copied layers”.
  • If you rotate a bitmap, you sometimes end up with odd shifting pixels that look blurry. Change the point of origin of the rotation: use one of the corners instead of the centre.
  • If you paste from Illustrator to Photoshop the result can be blurry. Before pasting, select exactly the size you want to paste in. Experiment to find the right size to avoid blurring.
  • Tychpanel by Reimund Trost is a very handy tool for calculating sizes.
  • Another useful tool is a plugin called Guide Guide by Cameron McEfee which is particularly useful for grids.
  • Extensible baseline grids by Mike Precious is also a really handy technique for creating a baseline grid.
  • When tweaking letter shapes and spacing, for a logo, for example, try turning the letters upside down to get a different perspective. It can be clearer what needs to be tweaked.
  • Colour management is tricky. Some people turn sRGB off for exporting to the web to avoid colour shifting. Actually you need to set up your environment the right way. Calibrate your screen. Then set up colour management for Adobe Creative Suite. Veerle chose the Adobe RGB environment: she works in print as well as web, so just using sRGB isn’t going to work for her. Have your environment set up to have a wide gamut; you can always narrow it down for specific exports like for the web, for example.
  • When importing, assign a profile rather than converting to a profile. Converting is a destructive process whereas assigning a colour profile doesn’t actually alter the image file.

Veerle likes to start by forgetting about technical constraints and just experimenting in a free-form way. That can lead to new, more creative ideas instead of limiting yourself. Of course you can’t go too far, but still, there’s a good zone for experimentation.

Testing HTML5

dConstruct week is in full swing. The conference itself is tomorrow. Remy and Brian are doing their workshops today. Myself, Rich and Nat did our HTML5 and CSS3 Wizardry workshop yesterday.

I was handling the HTML5 side of things and had quite a bit of fun with it. I put together an HTML5 pocket book using Natalie’s superb CSS. View it in a WebKit or Gecko-based browser and then print it out to experience the CSS3 transform magic. Natalie made a CSS3 pocket book for the workshop which was a nice self-documenting example of CSS transforms. Hers turned out much neater than mine—my folding fu isn’t so good. But hey, it’s the thought that counts and I figured it was nice to give every attendee something hand-crafted.

I prepared some exercises for the workshop and I have to admit that I had an ulterior motive with one of them. Each attendee was provided with two sheets of paper. One sheet of paper listed some new elements in HTML5 in alphabetical order:

  1. article
  2. aside
  3. details
  4. figure
  5. footer
  6. header
  7. hgroup
  8. nav
  9. section

On another sheet of paper, I listed definitions of those elements taken from the spec but in no particular order:

  • …a group of introductory or navigational aids.

  • …represents a section of a page that consists of content that is tangentially related to the content around it, and which could be considered separate from that content.

  • …used to group a set of h1–h6 elements when the heading has multiple levels, such as subheadings, alternative titles, or taglines.

  • …typically contains information about its section such as who wrote it, links to related documents, copyright data, and the like.

  • …some flow content, optionally with a caption, that is self-contained and is typically referenced as a single unit from the main flow of the document.

  • …a section of a page that links to other pages or to parts within the page: a section with navigation links.

  • …a thematic grouping of content, typically with a heading, possibly with a footer.

  • …a section of a page that consists of a composition that forms an independent part of a document, page, or site.

  • …additional information or controls which the user can obtain on demand.

I then asked the attendees to match up the definitions with the element whose name sounded like the best match. To be clear: this wasn’t a test of knowledge. I was testing the spec.

Giving this exercise to thirty very savvy web developers yielded some clear results. There’s definitely a lot of confusion around when to use section and when to use article. I’m not convinced that there needs to be two different elements, especially now that the article element no longer takes the cite or pubdate attributes. figure and aside were also areas of confusion.

When the workshop was over, I collected the pages with everyone’s answers. Once I get some time I’ll publish the results, probably in a spreadsheet. Then I can present that data to the WHATWG list. Some people on IRC were wondering why my superfriends and I haven’t presented our concerns by email. Well, we will. But I think there’s a lot of value in publicly discussing this stuff (and soliciting feedback). Mostly though, I’ll feel a lot more comfortable about raising an issue if I can back it up with some data. There’s a big difference between telling Hixie your opinion and giving Hixie data.

So, in a very real sense, I got a lot out of the workshop. It took quite a while to put the workshop together. The face-to-face meeting with my unicorn-powered peers in New York proved to be absolutely invaluable. I was tweaking the slides right up till the day of the workshop; not because I was rearranging the content, but because the spec was literally changing overnight (albeit in small ways).

Now that the workshop is over, I can relax. And relax I will …in Canada. I’m off to Whistler this weekend for Jessica’s brother’s wedding, followed by a couple of days in Vancouver.

Alas, that means I won’t be around for all of dConstruct. I’ll be able to catch Adam Greenfield followed by Mike Migurski with Ben Cerveny before heading up to Heathrow. But I won’t be able to make it to BarCamp.

Well, I’m sure that everyone who’s coming to Brighton will have plenty of fun without me. And I plan to have plenty of fun in British Columbia …though at some stage, I need to make some time to collate all that yummy data from the workshop.

HTML5 and me

I can never pinpoint the exact moment at which I “get into” a particular technology. CSS, DOM Scripting, microformats …there was never any Damascene conversion to any of them. Instead, I’d just notice one day, after gradually using the technology more and more, that I was immersed in it.

That’s how I feel about now.

There’s another feeling that accompanies this realisation. I remember feeling it about CSS in the late 90s and about DOM Scripting half a decade ago. At the same time as I look up from my immersion, I cast a glance around the web development landscape and ask Why aren’t more people paying attention to this?

In the case of HTML5, this puzzling state of affairs can, to a large extent, be explained by the toxic 2022 meme. Working web developers with an idle interest in HTML5 would google the term, find a blog post telling them that it won’t be “ready” until 2022, and then happily return to their work, comforted by the knowledge that HTML5 was some distant dream on the horizon—one that doesn’t affect them in any way today.

Nothing could be further from the truth. The Last Call Working Draft status is (optimistically) planned for October; that’s one month away.

And what rough beast, its hour come round at last, slouches towards Bethlehem to be born?

If you want to have a say in the formation of the most important web standard in existence, don’t put off getting involved. As Bruce says, If you don’t vote, you can’t bitch.

Still, I think the attitude of most web developers towards HTML5 right now is, at the very least, “interested, if a little sceptical”—that’s certainly how I felt when I started dabbling in it.

A little while back, I got together with some of my interested (if a little sceptical) colleagues in New York, thanks to a generous invitation from Zeldman.

Dan Cederholm, Jeremy Keith, Eric Meyer, Ethan Marcotte, Tantek Çelik, Nicole Sullivan, Wendy Chisholm

After a fairly intense two days of poring over the spec, I think it’s fair to say that, on balance, the interest increased and the scepticism decreased. That’s not to say that everything looks rosy in the current incarnation of HTML5. When you’ve got some of the smartest front-end web developers I know of in the same room together and they all agree that some parts of the spec are confusing or downright wrong, that’s quite worrying.

On the plus side, most of the issues are pretty minor in the grand scheme of things. It’s fair to say that most of the stuff that interests web authors—the semantic side of things—only accounts for a small part of HTML5. Most of the HTML5 specification is about error handling, APIs and shiny new interactive content. There are plenty of programmers and browser makers forging those powerful new tools. But as qualified as they are to hammer out those complex constructs, they are not necessarily the most qualified to make decisions on creating new structural elements. For that, you need the input of authors. And authors have been decidedly slow to get involved with HTML5.

It’s time for authors to get involved. I believe our voices will be welcomed. According to the HTML design principles:

…consider users over authors over implementors over specifiers over theoretical purity.

I’ll get the ball rolling with my own little list of things that are troubling me…

small

I’m with Bruce and Remy. If the small element is being redefined for disclaimers, caveats, legal restrictions, or copyrights, it needs to handle how that kind of content is published in the wild. That means it needs to be able to wrap paragraphs, lists and other flow content.
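
In other words, something like this ought to be conforming (it isn’t under the current content model, which only allows phrasing content; the copy here is invented):

<small>
  <p>Prices include VAT. Offer subject to availability.</p>
  <p>Copyright © 2009 Example Corp. All rights reserved.</p>
</small>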

Alternatively, it should go the way of its evil twin, the big element, and simply be deprecated …sorry, I mean obsolete and non-conforming.

time

I’ll join in the chorus of people who think that the restrictions on the information that the new time element can contain are unnecessarily draconian. You can encode a date and time, you can encode a date, but you can’t encode just a month and a year. So you can’t make a piece of information like “April 1912” machine-readable. The spec says the time element:

…is intended as a way to encode modern dates and times in a machine-readable way

Which is great. But the sentence doesn’t finish there. It goes on:

so that user agents can offer to add them to the user’s calendar.

That’s one use case! I don’t think it’s wise to rain on the parade of anyone wanting to build, say, timeline mashups. Trying to mandate use cases ahead of time is not just counter-productive, it’s probably impossible. Can you imagine if Flickr had launched their API with strict instructions that it could only be used for one particular purpose?
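
To be concrete, the kind of markup I’d like to be able to write looks something like this hypothetical example:

<time datetime="1912-04">April 1912</time>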

figure

I have nothing against the figure element itself, although it does seem uncomfortably close to aside, but the insistence on recycling the legend element to handle the caption is problematic.

Don’t get me wrong: I’m all for re-using existing elements rather than creating new ones, and I know that Hixie looked at all the options. But the way that browsers currently treat the legend element makes it unusable outside of a form.

I think that the label element could work instead.
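
To illustrate (the caption text is invented), the spec currently wants something like the first pattern; I’d rather see something like the second:

<figure>
  <img src="/path/to/image.png" alt="image description">
  <legend>A caption describing the image.</legend>
</figure>

<figure>
  <img src="/path/to/image.png" alt="image description">
  <label>A caption describing the image.</label>
</figure>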

details

Just like figure, the details element reuses legend. In this case, label won’t do the trick. details is an interactive element and it doesn’t look like the label element can be made keyboard accessible.

In this case, as undesirable as it is, a new element may be called for.

article

I’ve got two issues with the article element.

  1. Firstly, its definition sounds awfully similar to section. I’m not convinced that there needs to be two different elements. Having two elements that look like a duck, walk like a duck and quack like a duck is just going to lead to confusion amongst authors wondering which duck to use.

  2. The article element, unlike the section element, can take an optional pubdate attribute to encode the publication date. I’m all in favour of having this information be machine-readable but the pubdate attribute smells like dark data, subject to metacrap rottage. In most cases, the publication date will be repeated in the content of the article anyway, so I’m in favour of adding a flag there rather than duplicating data. A Boolean pubdate attribute on a time element within an article header or footer should do the trick (see the sketch after this list).

    Update: Belay that last gripe, ensign. As proof of just how fast this spec moves, less than 24 hours after I published this, Hixie has implemented what I was suggesting.
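
For the record, the pattern I had in mind looks roughly like this (the dates and content are placeholders):

<article>
  <header>
    <h1>Article title goes here</h1>
    <p>Published on <time pubdate datetime="2009-09-06">September 6th, 2009</time></p>
  </header>
  <p>…</p>
</article>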

Speaking of footer, this one is the biggie…

footer

There is a big disconnect between what the HTML5 spec calls a footer and what authors on the web call a footer.

According to the spec, you’re only supposed to put some kinds of content inside a footer:

Flow content, but with no heading content descendants, no sectioning content descendants, and no header or footer element descendants.

That means no nav or headings in footer. The way the footer element is defined in the spec, it’s little more than a slightly expanded version of address.

Ah, address! One of the most problematic elements in HTML 4. It is often incorrectly used to mark up street addresses. But is it any wonder? When an element is named address, it’s hardly surprising that authors are going to use it for marking up addresses. The same thing is going to happen with footer.

The term “footer” was not invented for HTML5. It’s been in use on the web for years and in print for even longer. But if you ask any author to define what they mean by the term “footer”, you’ll get a very different definition to the one in the HTML5 spec. They may even point to specific examples of footers on sites like Flickr or on blogs, where they contain headings and navigation.
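
Here’s a sketch of the kind of thing authors typically mean by a footer (the links and names are invented). It falls foul of the content model quoted above because of the heading and the nav:

<footer>
  <nav>
    <h2>Explore</h2>
    <ul>
      <li><a href="/about/">About</a></li>
      <li><a href="/contact/">Contact</a></li>
    </ul>
  </nav>
  <p><small>Copyright © 2009 Example Corp</small></p>
</footer>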

To be fair, when the new structural elements were being forged back in 2005, what Derek Powazek termed fat footers weren’t nearly as prevalent. So when Hixie ran his analytics on a shitload of web pages crawled by Google and found that “footer” was by far the most common class name, most footer content was pretty meagre. But usage changes (see also: time).

The way that the element named footer is defined in HTML5—to be used multiple times in a single document in sections and articles as well as at the document level—is very different from the convention named footer in common usage on the web today. Most of the instances of what authors call a footer are more like what the HTML5 spec defines as aside.

I don’t want to spend the next decade telling authors not to mark up their footers as footers. It was bad enough telling people not to mark up addresses as addresses. In any case, authors aren’t going to listen. If they see there’s an element called footer, they will assume it refers to the device known as a footer, and mark up their content accordingly. At that point, the HTML5 spec will have become a work of fiction instead of documenting what’s actually on the web.

One of two things needs to happen. Either:

  1. The content model of footer is updated to match that of header, which is much more liberal in what it accepts, or:
  2. The name of the element currently called footer should be changed to match the current, restrictive definition. I suggest using contentinfo, which is the name of an existing ARIA role for exactly this kind of content.

ARIA roles, by the way, are an excellent addition to HTML5. ARIA integration is a win for ARIA and a win for HTML5, in my opinion. Most of all, it’s a win for authors who now have a whole swathe of extra semantics they can sprinkle into their documents (and use as styling hooks with attribute selectors).
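
For example, a small sketch (the role, selector and content are just one possibility) of an ARIA role doubling as a styling hook:

<style>
  /* an attribute selector keyed off the ARIA role */
  [role="contentinfo"] { font-size: smaller; }
</style>

<footer role="contentinfo">
  <p><small>Copyright © 2009 Example Corp</small></p>
</footer>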

Thus endeth my list of things I want to see fixed in HTML5. I’m leaving out the massive issue of canvas accessibility because:

  1. that’s beyond my area of expertise,
  2. smarter people than me are working on it, and
  3. I think that canvas would probably benefit from being spun off into a separate spec.

There are other little things that bother me in HTML5—hgroup smells funny, cite shouldn’t be restricted to titles of works, and I miss the rev attribute on links—but those are all personal foibles; opinions unsupported by data. I’d rather concede than argue without data.

Because, make no mistake, data is what’s needed if you want to effect change in HTML5. Despite the attempts to paint Hixie as a stubborn, opinionated dictator, he is himself a slave to data. He shows an almost robot-like ability to remove his own ego from a debate and follow where the data leads.

If you are an author of HTML documents, I strongly encourage you to get involved in the HTML5 process.

  1. Read the spec.
  2. Join the mailing list.
  3. Hang out in the IRC channel.

Like I said, most of the spec and discussion is about APIs rather than semantics, but it’s precisely because the spec isn’t directly aimed at authors that authors need to get involved.

The HTML5 Equilibrium

HTML5 is a strange character with what appears to be a split personality. Hardly surprising, then, that something so divided would appear to be so divisive.

First of all, there’s the spec itself.

The specification

HTML5 walks a fine line between maintaining backward compatibility with existing markup and forging the way as a modern, updated specification for the future. If it strays too far in paving the cowpaths and simply codifies what authors already publish, then the spec would mandate using tables for layout and font elements for presentation because that’s still what most of the web does. On the other hand, if it drifts too far in the other direction, the result will be something as theoretically pure but as practically useless as XHTML 2.

The result is that HTML5 appears to be a self-contradictory mess. But it’s hard to imagine a successful web technology that isn’t a mess. That’s because the web itself is a mess. Clay Shirky described exactly how messy it is back in 1996:

The server would use neither a persistent connection nor a store-and-forward model, thus giving it all the worst features of both telnet and e-mail.

The hypertext model would ignore all serious theoretical work on hypertext to date. In particular, all hypertext links would be one-directional, thus making it impossible to move or delete a piece of data without ensuring that some unknown number of pointers around the world would silently fail.

And yet the web succeeded because:

…of the various implementations of a worldwide hypertext protocol, we have the worst one possible.

Except, of course, for all the others.

…a reference to Churchill’s oft-cited maxim that:

…democracy is the worst form of government except all the others that have been tried.

Which brings us nicely to the subject of governance and process, another area where HTML5 appears to be split.

The process

Democracy—or at least, consensus—drives the process of most W3C specs. But HTML5 isn’t just being developed at the W3C. HTML5 is also being developed by the WHATWG. Sounds crazy, doesn’t it? The reasons for the split are historical—the W3C rejected HTML as the future of the web in 2004, so the WHATWG started their own work, and then the W3C had a change of heart in 2007. Hence the parallel development. It turns out to be a pretty good system of checks and balances. The editors on the WHATWG side—Ian Hickson and David Hyatt—are balanced by the chairs on the W3C side—Chris Wilson and Sam Ruby.

The WHATWG process isn’t democratic. There’s no voting on issues. Instead, Hixie acts as a self-described benevolent dictator who decides what goes into and what comes out of the spec. That sounds, frankly, shocking. The idea of one person having so much power should make any right-thinking person recoil. But here’s the real kick in the teeth: it works.

In theory, a democratic process should be the best way to develop an open standard. In practice, it results in a tarpit (see XHTML2, CSS3, and pretty much any other spec in development at the W3C—not that the membership policy of the W3C is any great example of democracy in action).

In theory, an unelected autocrat having control of a specification is abhorrent. In practice, it works really, really well …if it’s the right person.

That’s always been the case with benevolent dictatorships. The populace transfers moral responsibility to the head of state, personified by an all-powerful ruler like Shakespeare’s Henry V:

If his cause be wrong, our obedience to the King wipes the crime of it out of us.

That’s a lot of responsibility for one person to carry. Remarkably, Hixie carries the weight with exceptional disinterest and a machine-like even-handedness:

Let it be said that Ian Hickson is the Solomon of web standards; his summary of the situation is mind-bogglingly even-handed and fair-minded.

But it doesn’t always seem that way to those on the outside looking in at the HTML5 process. Debates around the summary attribute or RDFa might well give the impression that Hixie is ignoring voices in opposition. Nothing could be further from the truth. The real challenge is in finding good solutions to problems, and sometimes that doesn’t necessarily mean using an existing solution—despite the pave the cowpaths mantra.

If you have a potential solution to a problem in HTML5, the way to present it is with data. If you don’t have the data, then maybe it isn’t such a good solution. I’ve experienced this myself with the rev attribute, which has been removed from HTML5. I think it should remain. I think it’s a potentially powerful attribute. But I can’t argue with the data. My personal preference isn’t a good enough reason to keep it (‘though I may occasionally tip out some beer to honour its memory).

The irony is that HTML5 has the reputation of being a spec beyond the influence of the average web developer when in fact it has the most open process of any web standard. If you want to have a say in the development of CSS3… well, good luck with that. If you want to have a say in the development of HTML5, you can.

Unlike just about every other W3C activity, the HTML working group is open to the public. You can join in as a public invited expert—except you don’t actually get invited. Instead you’ll need to complete this three-step process:

  1. Sign up for an account with the W3C.
  2. Fill in the invited expert application—you’ll need your W3C credentials to do this.
  3. Fill in the form for joining the HTML working group.

But a lot of the activity on the W3C side of things is very much about the process of developing a spec; voting, consensus and meetings. To have your say in putting the content of the HTML5 spec together, join in the WHATWG activities.

Two words of warning…

Firstly, time is of the essence. HTML5 is due to enter Last Call Working Draft in October of this year. From then it’s going to be a race towards Candidate Recommendation in 2012. If you want to influence this spec, now is the time to get involved. Don’t wait. The FUD around the boogieman date of 2022 is proving to be a particularly virulent meme that web developers have latched on to as an excuse for not caring about HTML5.

Secondly, you might be surprised by the contents of the specification. This is yet another area where HTML5 displays a split personality.

The feature set

Traditionally, HTML has been a format for semantically marking up hypertext—the clue is in the name. That’s still true but HTML5 is also a format for creating web applications; the WHATWG work that became HTML5 started life as a spec for web apps. This means that a lot of the discussions on the mailing list are about APIs, DOM trees, and some fairly code-heavy stuff around canvas and video. Discussions of the semantics of new structural elements like header, footer and article aren’t nearly as prevalent—all the more reason for working web developers to get involved in the process.

The truth is that much of the HTML5 spec isn’t aimed at web developers at all; it’s aimed at browser makers. This has fostered some conspiracy theories about how browser makers are controlling the spec but the truth is that browser makers have always had the final say in what gets implemented. It doesn’t matter how perfect a web standard is if nobody can use it. Hixie, for all his apparent power, acknowledges this:

I don’t want to be writing fiction, I want to be writing a spec that documents the actual behaviour of browsers.

Fortunately we don’t have to wade through a spec aimed at browser makers. Michael Smith has taken all the author-specific parts of HTML5 and published them in a parallel document: HTML5: The Markup Language.

Sam Ruby tells of a useful distinction drawn up by T.V. Raman between the kinds of new features we’re seeing in HTML5:

extending the platform vs. extending the language.

The first type of feature is something that requires significant effort from browser makers; canvas, video, and all the new APIs. The second type of feature is something that requires very little effort from browser makers but is enormously significant for authors; header, footer, article, etc.

The diagnosis

HTML5 is the web equivalent of a circus tightrope act, performing equal feats of balancing and juggling:

  • backward compatibility with shiny new features,
  • W3C consensus with WHATWG benevolent dictatorship,
  • and new browser features with new semantics.

It’s hardly surprising that such a schizophrenic spec can seem so confusing. If you spend some time immersing yourself in the world of HTML5, most of this confusion will evaporate. If symptoms persist, I recommend consulting the HTML5 doctor.