Designing Progressive Web Apps by Jason Grigsby

It’s the afternoon of the second day of An Event Apart Seattle and Jason is talking about Designing Progressive Web Apps. These are my notes…

Jason wants to talk about a situation you might find yourself in. You’re in a room and in walks the boss, who says “We need a progressive web app.” Now everyone is asking themselves “What is a progressive web app?” Or maybe “How does the CEO even know about progressive web apps?”

Well, trade publications are covering progressive web apps. Lots of stats and case studies are being published. When executives see this kind of information, they don’t want to get left out. Jason keeps track of this stuff at PWA Stats.

Answering the question “What is a progressive web app?” is harder than it should be. The phrase was coined by Frances Berriman and Alex Russell. They listed ten characteristics that defined progressive web apps. The “linkable” and “progressive” characteristics are the really interesting and new characteristics. We’ve had technologies before (like Adobe Air) that tried to make app-like experiences, but they weren’t really of the web. Progressive web apps are different.

Despite this list of ten characteristics, even people who are shipping progressive web apps find it hard to define the damn thing. The definition on Google’s developer site keeps changing. They reduced the characteristics from ten to six. Then it became “reliable, fast, and engaging.” What does that mean? Craigslist is reliable, fast, and engaging—does that mean it’s a progressive web app?

The technical definition is useful (kudos to me, says Jason):

  1. HTTPS
  2. service worker
  3. manifest file

If you don’t have those three things, it’s not a progressive web app.

We should definitely use HTTPS if we want to make life harder for the NSA. Also, browser makers are making APIs available only under HTTPS. By July, Chrome will mark HTTP sites as insecure. Every site should be under HTTPS.

Service workers are where the power is. They act as a proxy. They allow us to say what we want to cache, what we want to go out to the network for; things that native apps have been able to do for a while. With progressive web apps we can cache the app shell and go to the network for content. Service workers can provide a real performance boost.
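
As a rough sketch of that idea (the file names and cache name here are invented for illustration, not taken from Jason’s talk), a service worker might pre-cache the app shell when it’s installed and then act as a proxy for every request:

  // sw.js: a minimal app-shell sketch
  const SHELL_CACHE = 'app-shell-v1';
  const SHELL_FILES = ['/', '/styles.css', '/app.js', '/offline.html'];

  // Pre-cache the app shell at install time
  self.addEventListener('install', event => {
    event.waitUntil(
      caches.open(SHELL_CACHE).then(cache => cache.addAll(SHELL_FILES))
    );
  });

  // Act as a proxy: answer from the cache when we can,
  // go out to the network for everything else (i.e. the content)
  self.addEventListener('fetch', event => {
    event.respondWith(
      caches.match(event.request).then(cached => cached || fetch(event.request))
    );
  });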

A manifest file is simply a JSON file. It’s short and clear. It lists information about the app: icons, colours, etc.
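
A minimal sketch of what such a file might contain (the name, colours, and icon paths are made up for illustration; the display member is what later lets you remove the browser chrome):

  {
    "name": "Example App",
    "short_name": "Example",
    "start_url": "/",
    "display": "standalone",
    "background_color": "#ffffff",
    "theme_color": "#1b9e77",
    "icons": [
      { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
      { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
    ]
  }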

Once you provide those three things, you get benefits. Chrome and Opera on Android will prompt to add the app to the home screen.

So that’s what’s required for progressive web apps, but there’s more to them than that (in the same way there’s more to responsive web design than the three requirements in the baseline definition).

The hype around progressive web apps can be a bit of a turn-off. It certainly was for Jason. When he investigated the technologies, he wondered “What’s the big deal?” But then he was on a panel at a marketing conference, and everyone was talking about progressive web apps. People’s expectations of what you could do on the web really hadn’t caught up with what we can do now, and the phrase “progressive web app” gives us a way to encapsulate that. As Frances says, the name isn’t for us; it’s for our boss or marketer.

Jason references my post about using the right language for the right audience.

Should you have a progressive web app? Well, if you have a website, then the answer is almost certainly “Yes!” If you make money from that website, the answer is definitely “Yes!”

But there’s a lot of FUD around progressive web apps. It brings up the tired native vs. web battle. Remember though that not 100% of your users or customers have your app installed. And it’s getting harder to convince people to install apps. The average number of apps installed per month is zero. But your website is often a customer’s first interaction with your company. A better web experience can only benefit you.

Often, people say “The web can’t do…” but a lot of the time that information is out of date. There are articles out there with outdated information. One article said that progressive web apps couldn’t access the camera, location, or the fingerprint sensor. Yet look at Instagram’s progressive web app: it accesses the camera. And just about every website wants access to your location these days. And Jason knows you can use your fingerprint to buy things on the web because he accidentally bought socks when he was trying to take a screenshot of the J.Crew website on his iPhone. So the author of that article was just plain wrong. The web can do much more than we think it can.

Another common objection is “iOS doesn’t support progressive web apps”. Well, as of last week that is no longer true. But even when that was still true, people who had implemented progressive web apps were seeing increased conversion even on iOS. That’s probably because, if you’ve got the mindset for building a progressive web app, you’re thinking deeply about performance. In many ways, progressive web apps are a trojan horse for performance.

These are the things that people think about when it comes to progressive web apps:

  1. Making it feel like an app
  2. Installation and discovery
  3. Offline mode
  4. Push notifications
  5. Beyond progressive web apps

Making it feel like an app

What is an app anyway? Nobody can define it. Once again, Jason references my posts on this topic (how “app” is like “obscenity” or “brunch”).

A lot of people think that “app-like” means making it look native. But that’s a trap. Which operating system will you choose to emulate? Also, those design systems change over time. You should define your own design. Make it an exceptional experience regardless of OS.

It makes more sense to talk in terms of goals…

Goal: a more immersive experience.

Possible solution: removing the browser chrome and going fullscreen?

You can define this in the manifest file. But as you remove the browser chrome, you start to lose things that people rely on: the back button, the address bar. Now you have to provide that functionality. If you move to a fullscreen application you need to implement sharing, printing, and the back button (and managing browser history is not simple). Remember that not every customer will add your progressive web app to their home screen. Some will have browser chrome; some won’t.

Goal: a fast fluid experience.

Possible solution: use an app shell model.

You want smooth pages that don’t jump around as the content loads in. The app shell makes things seem faster because something is available instantly—it’s perceived performance. Basically you’re building a single page application. That’s a major transition. But thankfully, you don’t have to do it! Progressive web apps don’t have to be single page apps.

Goal: an app with personality.

Possible solution: Animated transitions and other bits of UI polish.

Really, it’s all about delight.

Installation and discovery

In your manifest file you can declare a background colour for the startup screen. You can also declare a theme colour—it’s like you’re skinning the browser chrome.

You can examine the manifest files for a site in Chrome’s dev tools.

Once you’ve got a progressive web app, some mobile browsers will start prompting users to add it to their home screen. Firefox on Android displays a little explainer the first time you visit a progressive web app. Chrome and Opera have add-to-homescreen banners which are a bit more intrusive. The question of when they show up keeps changing. They use a heuristic to decide this. The heuristic has been changed a few times already. One thing you should consider is suppressing the banner until it’s an optimal time. Flipkart do this: they only allow it on the order confirmation page—the act of buying something makes it really likely that someone will add the progressive web app to their home screen.
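
Suppressing the banner relies on Chrome’s beforeinstallprompt event; here’s a rough sketch (the idea of waiting for an order confirmation page is from the talk, but the showInstallPrompt function and where you call it are up to you):

  let deferredPrompt;

  window.addEventListener('beforeinstallprompt', event => {
    event.preventDefault();   // stop the browser showing its banner straight away
    deferredPrompt = event;   // keep the event around for a better moment
  });

  // Call this at an optimal time, e.g. on an order confirmation page
  function showInstallPrompt() {
    if (deferredPrompt) {
      deferredPrompt.prompt();
      deferredPrompt = null;
    }
  }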

What about app stores? We don’t need them for progressive web apps—they’re on the web. But Microsoft is going to start adding progressive web apps to their app store. They’ve built a site called PWA Builder to help you with your progressive web app.

On the Android side, there’s Trusted Web Activity which is kind of like PhoneGap—it allows you to get a progressive web app into the Android app store.

But remember, your progressive web app is your website so all the normal web marketing still applies.

Offline mode

A lot of organisations say they have no need for offline functionality. But everyone has a need for some offline capability. At the very least, you can provide a fallback page, like Trivago’s offline maze game.

You can cache content that has been recently viewed. This is what Jason does on the Cloud Four site. They didn’t want to make any assumptions about what people might want, so they only cache pages as people browse around the site.
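
A sketch of that “cache as you browse” approach (this isn’t Cloud Four’s actual code) could live in the service worker’s fetch handler: fetch from the network, keep a copy, and fall back to the copy when the network isn’t available:

  const PAGE_CACHE = 'recently-viewed-v1';

  self.addEventListener('fetch', event => {
    if (event.request.method !== 'GET') return;
    event.respondWith(
      fetch(event.request)
        .then(response => {
          // Keep a copy of each page as it's viewed
          const copy = response.clone();
          caches.open(PAGE_CACHE).then(cache => cache.put(event.request, copy));
          return response;
        })
        // Offline? Serve whatever was cached on a previous visit
        .catch(() => caches.match(event.request))
    );
  });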

If you display cached information, you might want to display how stale the information is e.g. for currency exchange rates.

Another option is to let people choose what they want to keep offline. The Financial Times does this. They also pre-cache the daily edition.

If you have an interactive application, you could queue tasks and then carry them out when there’s a connection.
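
The Background Sync API is one way to do that queueing (support is limited to Chromium-based browsers; sendQueuedTasks here is a hypothetical function that replays whatever you stored, e.g. in IndexedDB):

  // In the page: ask for a sync when the user does something while offline
  navigator.serviceWorker.ready.then(registration => {
    registration.sync.register('outbox');
  });

  // In the service worker: replay the queue once there's a connection
  self.addEventListener('sync', event => {
    if (event.tag === 'outbox') {
      event.waitUntil(sendQueuedTasks());
    }
  });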

Or, like Slack does, don’t let people write something if they’re offline. That’s better than letting someone write something and then losing it.

Workbox is a handy library for providing offline functionality.

Push notifications

The JavaScript for push notifications is relatively easy, says Jason. It’s the back-end stuff that’s hard. That’s because successful push notifications are personalised. But to do that means doing a lot more work on the back end. How do you integrate with preferences? Which events trigger notifications?

There are third-party push notification services that take care of a lot of this for you. Jason has used OneSignal.

Remember that people are really annoyed by push notifications. Don’t ask for permission immediately. Don’t ask someone to marry you on a first date. On Cloud Four’s blog, they only prompt after the user has read an article.
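
In code, that mostly means not asking on page load; a sketch along those lines (hasReadArticle and subscribeToPush are hypothetical placeholders for your own logic):

  // Only ask once the person has shown some engagement,
  // e.g. after they've finished reading an article
  function maybeAskForPushPermission() {
    if (hasReadArticle() && Notification.permission === 'default') {
      Notification.requestPermission().then(result => {
        if (result === 'granted') {
          subscribeToPush();   // e.g. registration.pushManager.subscribe(...)
        }
      });
    }
  }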

Twitter’s progressive web app does this really well. It’s so important that you do this well: if a user says “no” to your push notification permission request, you will never be able to ask them again. There used to be three options on Chrome: allow, block, or close. Now there are just two: allow or block.

Beyond progressive web apps

There are a lot of APIs that aren’t technically part of progressive web apps but get bundled in with them. Like the Credential Management API or the Payment Request API (which is converging with ApplePay).
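
For instance, the Payment Request API hands the checkout UI over to the browser; a minimal sketch (the payment method, amounts, and server handling are illustrative only):

  const request = new PaymentRequest(
    [{ supportedMethods: 'basic-card' }],
    { total: { label: 'Total', amount: { currency: 'USD', value: '10.00' } } }
  );

  request.show()
    .then(response => {
      // Send response.details to the server for processing, then confirm
      return response.complete('success');
    })
    .catch(() => {
      // The user dismissed the payment sheet; fall back to a regular checkout form
    });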

So how should you plan your progressive web app launch? Remember it’s progressive. You can keep adding features. Each step along the way, you’re providing value to people.

Start with some planning and definition. Get everyone in a room and get a common definition of what the ideal progressive web app would look like. Remember there’s a continuum of features for all five of the things that Jason has outlined here.

Benchmark your existing site. It will help you later on.

Assess your current website. Is the site reasonably fast? Is it responsive? Fix those usability issues first.

Next, do the baseline. Switch to HTTPS. Add a manifest file. Add a service worker. Apart from the HTTPS switch, this can all be done on the front end. Don’t wait for all three: ship each one when it’s ready.

Then do front-end additions: pre-caching pages, for example.

Finally, there are the larger initiatives (with more complex APIs). This is where your initial benchmarking really pays off. You can demonstrate the value of what you’re proposing.

Every step on the path to a progressive web app makes sense on its own. Figure out where you want to go and start that journey.

Design ops for design systems

Leading Design was one of the best events I attended last year. To be honest, that surprised me—I wasn’t sure how relevant it would be to me, but it turned out to be the most on-the-nose conference I could’ve wished for.

Seeing as the event was all about design leadership, there was inevitably some talk of design ops. But I noticed that the term was being used in two different ways.

Sometimes a speaker would talk about design ops and mean “operations, specifically for designers.” That means all the usual office practicalities—equipment, furniture, software—that designers might need to do their jobs. For example, one of the speakers recommended having a dedicated design ops person rather than trying to juggle that yourself. That’s good advice, as long as you understand what’s meant by design ops in that context.

There’s another context of use for the phrase “design ops”, and it’s one that we use far more often at Clearleft. It’s related to design systems.

Now, “design system” is itself a term that can be ambiguous. See also “pattern library” and “style guide”. Quite a few people have had a stab at disambiguating those terms, and I think there’s general agreement—a design system is the overall big-picture “thing” that can contain a pattern library, and/or a style guide, and/or much more besides.

None of those great posts attempt to define design ops, and that’s totally fair, because they’re all attempting to define things—style guides, pattern libraries, and design systems—whereas design ops isn’t a thing, it’s a practice. But I do think that design ops follows on nicely from design systems. I think that design ops is the practice of adopting and using a design system.

There are plenty of posts out there about the challenges of getting people to use a design system, and while very few of them use the term design ops, I think that’s what all of them are about.

Clearly design systems and design ops are very closely related: you really can’t have one without the other. What I find interesting is that a lot of the challenges relating to design systems (and pattern libraries, and style guides) might be technical, whereas the challenges of design ops are almost entirely cultural.

I realise that tying design ops directly to design systems is somewhat limiting, and the truth is that design ops can encompass much more. I like Andy’s description:

Design Ops is essentially the practice of reducing operational inefficiencies in the design workflow through process and technological advancements.

Now, in theory, that can encompass any operational stuff—equipment, furniture, software—but in practice, when we’re dealing with design ops, 90% of the time it’s related to a design system. I guess I could use a whole new term (design systems ops?) but I think the term design ops works well …as long as everyone involved is clear on the kind of design ops we’re all talking about.

Ubiquity and consistency

I keep thinking about this post from Baldur Bjarnason, Over-engineering is under-engineering. It took me a while to get my head around what he was saying, but now that (I think) I understand it, I find it to be very astute.

Let’s take a single interface element, say, a dropdown menu. This is the example Laura uses in her article for 24 Ways called Accessibility Through Semantic HTML. You’ve got two choices, broadly speaking:

  1. Use the HTML select element.
  2. Create your own dropdown widget using JavaScript (working with divs and spans).

The advantage of the first choice is that it’s lightweight, it works everywhere, and the browser does all the hard work for you.

But…

You don’t get complete control. Because the browser is doing the heavy lifting, you can’t craft the details of the dropdown to look identical on different browser/OS combinations.

That’s where the second option comes in. By scripting your own dropdown, you get complete control over the appearance and behaviour of the widget. The disadvantage is that, because you’re now doing all the work instead of the browser, it’s up to you to do all the work—that means lots of JavaScript, thinking about edge cases, and making the whole thing accessible.

This is the point that Baldur makes: no matter how much you over-engineer your own custom solution, there’ll always be something that falls between the cracks. So, ironically, the over-engineered solution—when compared to the simple under-engineered native browser solution—ends up being under-engineered.

Is it worth it? Rian Rietveld asks:

It is impossible to style select option. But is that really necessary? Is it worth abandoning the native browser behavior for a complete rewrite in JavaScript of the functionality?

The answer, as ever, is it depends. It depends on your priorities. If your priority is having consistent control over the details, then foregoing native browser functionality in favour of scripting everything yourself aligns with your goals.

But I’m reminded of something that Eric often says:

The web does not value consistency. The web values ubiquity.

Ubiquity; universality; accessibility—however you want to label it, it’s what lies at the heart of the World Wide Web. It’s the idea that anyone should be able to access a resource, regardless of technical or personal constraints. It’s an admirable goal, and what’s even more admirable is that the web succeeds in this goal! But sometimes something’s gotta give, and that something is control. Rian again:

The days that a website must be pixel perfect and must look the same in every browser are over. There are so many devices these days, that an identical design for all is not doable. Or we must take a huge effort for custom form elements design.

So far I’ve only been looking at the micro scale of a single interface element, but this tension between ubiquity and consistency plays out at larger scales too. Take page navigations. That’s literally what browsers do. Click on a link, and the browser fetches that URL, displaying progress as it goes. The alternative, as exemplified by single page apps, is to do all of that for yourself using JavaScript: figure out the routing, show some kind of progress, load some JSON, parse it, convert it into HTML, and update the DOM.

Personally, I tend to go for the first option. Partly that’s because I like to apply the rule of least power, but mostly it’s because I’m very lazy (I also have qualms about sending a whole lotta JavaScript down the wire just so the end user gets to do something that their browser would do for them anyway). But I get it. I understand why others might wish for greater control, even if it comes with a price tag of fragility.

I think Jake’s navigation transitions proposal is fascinating. What if there were a browser-native way to get more control over how page navigations happen? I reckon that would cover the justification of 90% of single page apps.

That’s a great way of examining these kinds of decisions and questioning how this tension could be resolved. If people are frustrated by the lack of control in browser-native navigations, let’s figure out a way to give them more control. If people are frustrated by the lack of styling for select elements, maybe we should figure out a way of giving them more control over styling.

Hang on though. I feel like I’ve painted a divisive picture, like you have to make a choice between ubiquity or consistency. But the rather wonderful truth is that, on the web, you can have your cake and eat it. That’s what I was getting at with the three-step approach I describe in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Like, say…

  1. The user needs to select an item from a list of options.
  2. Use a select element.
  3. Use JavaScript to replace that native element with a widget of your own devising.

Or…

  1. The user needs to navigate to another page.
  2. Use an a element with an href attribute.
  3. Use JavaScript to intercept that click, add a nice transition, and pull in the content using Ajax (see the sketch below).
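
Here’s a sketch of that third step for navigation (the link selector and the main element are assumptions about your markup, and I’m using fetch rather than older Ajax techniques). The crucial part is that if the script never runs, the link still works:

  document.querySelectorAll('a[href^="/"]').forEach(link => {
    link.addEventListener('click', event => {
      event.preventDefault();
      fetch(link.href)
        .then(response => response.text())
        .then(html => {
          // Swap in the new content and update the URL
          const doc = new DOMParser().parseFromString(html, 'text/html');
          document.querySelector('main').innerHTML = doc.querySelector('main').innerHTML;
          history.pushState({}, '', link.href);
        })
        .catch(() => { window.location.href = link.href; });   // fall back to a full page load
    });
  });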

The pushback I get from people in the control/consistency camp is that this sounds like more work. It kinda is. But honestly, in my experience, it’s not that much more work. Also, and I realise I’m contradicting the part where I said I’m lazy, but that’s why it’s called work. This is our job. It’s not about what we prefer; it’s about serving the needs of the people who use what we build.

Anyway, if I were to rephrase my three-step process in terms of under-engineering and over-engineering, it might look something like this:

  1. Start with user needs.
  2. Build an under-engineered solution—one that might not offer you much control, but that works for everyone.
  3. Layer on a more over-engineered solution—one that might not work for everyone, but that offers you more control.

Ubiquity, then consistency.

eLife goes live

The World Wide Web was forged in the crucible of science. Tim Berners-Lee was working at CERN, the European Centre for Nuclear Research, a remarkable place where the pursuit of knowledge—rather than the pursuit of profit—is the driving force.

I often wonder whether the web as we know it—an open, decentralised system—could’ve been born anywhere else. These days it’s easy to focus on the success stories of the web in the worlds of commerce and social networking, but I still find there’s something that really “clicks” between the web and science (Zooniverse being a classic example).

At Clearleft we’ve been lucky enough to work on science-driven projects like the Wellcome Library and the Wellcome Trust. It’s incredibly rewarding to work on projects where the bottom line is measured in knowledge-sharing rather than moolah. So when we were approached by eLife to help them with an upcoming redesign, we jumped at the chance.

We usually help organisations through our expertise in user-centred design, but in this case the design and UX were already in hand. The challenge was in the implementation. The team at eLife knew that they wanted a modular pattern library to keep their front-end components documented and easily reusable. Given Clearleft’s extensive experience with building pattern libraries, this was a match made in heaven (or whatever the scientific non-theistic equivalent of heaven is).

A group of us travelled up from Brighton to Cambridge to kick things off with a workshop. Before diving into code, it was important to set out the aims for the redesign, and figure out how a pattern library could best support those aims.

Right away, I was struck by the great working relationship between design and front-end development within eLife—there was a great collaborative spirit to the endeavour.

Some goals for the redesign soon emerged:

  • Promote the HTML reading experience as a first choice for readers.
  • Align the online experience with the eLife visual identity.

That led to some design principles:

  • Focus on content not site furniture.
  • Remove visual clutter and provide no more than the user needs at any stage of the experience.
  • Aid discovery of value-added content beyond the manuscript.

Those design principles then informed the front-end development process. Together we came up with a priority of concerns:

  1. Access
  2. Maintainability
  3. Performance
  4. Taking advantage of browser capabilities
  5. Visual appeal

It’s interesting that maintainability was such a high priority that it superseded even performance, but we also proposed a hypothesis at the same time:

Maintainability doesn’t negatively impact performance.

The combination of the design principles and priorities led us to formulate approaches that could be used throughout the project:

  • Progressive enhancement.
  • Small-screen first responsive images.
  • Only add libraries as needed.

Then we dived into the tech stack: build tools, version control approaches, and naming methodologies. BEM was the winner there.

None of those decisions were set in stone, but they really helped to build a solid foundation for the work ahead. Graham camped out in Cambridge for a while, embedding himself in the team there as they began the process of identifying, naming, and building the components.

The work continued after Clearleft’s involvement wrapped up, and I’m happy to say that it all paid off. The new eLife site has just gone live. It’s looking—and performing—beautifully.

What a great combination: the best of the web and the best of science!

eLife is a non-profit organisation inspired by research funders and led by scientists. Our mission is to help scientists accelerate discovery by operating a platform for research communication that encourages and recognises the most responsible behaviours in science.

Designing the Patterns Day site

Patterns Day is not one of Clearleft’s slick’n’smooth conferences like dConstruct or UX London. It’s more of a spit’n’sawdust affair, like Responsive Day Out.

You can probably tell from looking at the Patterns Day website that it wasn’t made by a crack team of designers and developers—it’s something I threw together over the course of a few days. I had a lot of fun doing it.

I like designing in the browser. That’s how I ended up designing Resilient Web Design, The Session, and Huffduffer back in the day. But there’s always the initial problem of the blank page. I mean, I had content to work with (the information about the event), but I had no design direction.

My designery colleagues at Clearleft were all busy on client projects so I couldn’t ask any of them to design a website, but I thought perhaps they’d enjoy a little time-limited side exercise in producing ideas for a design direction. Initially I was thinking they could all get together for a couple of hours, lock themselves in a room, and bash out some ideas as though it were a mini hack farm. Coordinating calendars proved too tricky for that. So Jon came up with an alternative: a baton relay.

Remember Layer Tennis? I once did the commentary for a Layer Tennis match and it was a riot—simultaneously terrifying and rewarding.

Anyway, Jon suggested something kind of like that, but instead of a file being batted back and forth between two designers, the file would be passed along from designer to designer. Each designer gets one art board in a Sketch file. You get to see what the previous designers have done, leaving you to either riff on that or strike off in a new direction.

The only material I supplied was an early draft of text for the website, some photos of the first confirmed speakers, and some photos I took of repeating tiles when I was in Porto (patterns, see?). I made it clear that I wasn’t looking for pages or layouts—I was interested in colour, typography, texture and “feel.” Style tiles, yes; comps, no.

Jon

Jon’s art board.

Jon kicks things off and immediately sets the tone with bright, vibrant colours. You can already see some elements that made it into the final site like the tiling background image of shapes, and the green-bordered text block. There are some interesting logo ideas in there too, some of them riffing on LEGO, others riffing on illustrations from Christopher Alexander’s book, A Pattern Language. Then there’s the typeface: Avenir Next. I like it.

James G

James G’s art board.

Jimmy G is up next. He concentrates on the tiles idea. You can see some of the original photos from Porto in the art board, alongside his abstracted versions. I think they look great, and I tried really hard to incorporate them into the site, but I couldn’t quite get them to sit with the other design elements. Looking at them now, I still want to get them into the site …maybe I’ll tinker with the speaker portraits to get something more like what James shows here.

Ed

Ed’s art board.

Ed picks up the baton and immediately iterates through a bunch of logo ideas. There’s something about the overlapping text that I like, but I’m not sure it fits for this particular site. I really like the effect of the multiple borders though. With a bit more time, I’d like to work this into the site.

James B

Batesy’s art board.

Batesy is the final participant. He has some other nice ideas in there, like the really subtle tiling background that also made its way into the final site (but I’ll pass on the completely illegible text on the block of bright green). James works through two very different ideas for the logo. One of them feels a bit too busy and chaotic for me, but the other one …I like it a lot.

I immediately start thinking “Hmm …how could I make this work in a responsive way?” This is exactly the impetus I needed. At this point I start diving into CSS. Not only did I have some design direction, I’m champing at the bit to play with some of these ideas. The exercise was a success!

Feel free to poke around the Patterns Day site. And while you’re there, pick up a ticket for the event too.

Code (p)reviews

I’m not a big fan of job titles. I’ve always had trouble defining what I do as a noun—I much prefer verbs (“I make websites” sounds fine, but “website maker” sounds kind of weird).

Mind you, the real issue is not finding the right words to describe what I do, but rather figuring out just what the heck it is that I actually do in the first place.

According to the Clearleft website, I’m a technical director. That doesn’t really say anything about what I do. To be honest, I tend to describe my work these days in terms of what I don’t do: I don’t tend to write a lot of HTML, CSS, and JavaScript on client projects (although I keep my hand in with internal projects, and of course, personal projects).

Instead, I try to make sure that the people doing the actual coding—Mark, Graham, and Danielle—are happy and have everything they need to get on with their work. From outside, it might look like my role is managerial, but I see it as the complete opposite. They’re not in service to me; I’m in service to them. If they’re not happy, I’m not doing my job.

There’s another aspect to this role of technical director, and it’s similar to the role of a creative director. Just as a creative director is responsible for the overall direction and quality of designs being produced, I have oversight of the quality of front-end output. I don’t want to be a bottleneck in the process though, and to be honest, most of the time I don’t do much checking on the details of what’s being produced because I completely trust Mark, Graham, and Danielle to produce top quality code.

But I feel I should be doing more. Again, it’s not that I want to be a bottleneck where everything needs my approval before it gets delivered, but I hope that I could help improve everyone’s output.

Now the obvious way to do this is with code reviews. I do it a bit, but not nearly as much as I should. And even when I do, I always feel it’s a bit late to be spotting any issues. After all, the code has already been written. Also, who am I to try to review the code produced by people who are demonstrably better at coding than I am?

Instead I think it will be more useful for me to stick my oar in before a line of code has been written; to sit down with someone and talk through how they’re going to approach solving a particular problem, creating a particular pattern, or implementing a particular user story.

I suppose it’s really not that different to rubber ducking. Having someone to talk out loud with about potential solutions can be really valuable in my experience.

So I’m going to start doing more code previews. I think it will also incentivise me to do more code reviews—being involved in the initial discussion of a solution means I’m going to want to see the final result.

But I don’t think this should just apply to front-end code. I’d also like to exercise this role as technical director with the designers on a project.

All too often, decisions are made in the design phase that prove problematic in development. It usually works out okay, but it often means revisiting the designs in light of some technical considerations. I’d like to catch those issues sooner. That means sticking my nose in much earlier in the process, talking through what the designers are planning to do, and keeping an eye out for any potential issues.

So, as technical director, I won’t be giving feedback like “the colour’s not working for me” or “not sure about those type choices” (I’ll leave that to the creative director), but instead I can ask questions like “how will this work without hover?” or “what happens when the user does this?” as well as pointing out solutions that might be tricky or time-consuming to implement from a technical perspective.

What I want to avoid is the swoop’n’poop, when someone seagulls in after something has been designed or built and points out all the problems. The earlier in the process any potential issues can be spotted, the better.

And I think that’s my job.

Design sprinting

James and I went to Ipswich last week for work. But this wasn’t part of an ongoing project—this was a short intense one-week feasibility study.

Leon from Suffolk Libraries got in touch with us about a project they’re planning to carry out soon: replacing their self-service machines with something more up-to-date. But rather than dive into commissioning the project straight away, he wisely decided to start with a one-week sprint to figure out exactly what the project would need to go ahead.

So that’s what James and I did. It was somewhat similar to the design sprint popularised by GV. We ensconced ourselves in the Ipswich library and packed a whole lot of work into five days. There was lots of collaboration, lots of sketching, lots of iterative design, and some rough’n’ready code. It was challenging, but a lot of fun. Also: we stayed in a pretty sweet AirBnB.

Our home for the week. This is a nice AirBnB.

You can read all about it in our case study. You can also read all about it from Leon’s point of view on his blog:

I can’t recommend this kind of research sprint enough. We got a report, detailed technical validation of an idea, mock ups and a plan for how to proceed, while getting staff and stakeholders involved in the project – all in the space of 5 days.

I think this approach makes a lot of sense. By the end of the week, James and I felt pretty confident about estimating times and costs for the full project. Normally trying to estimate that kind of thing can be a real guessing game. But with the small investment of one week’s worth of effort, you get a whole lot more certainty and confidence.

Have a look for yourself.

Where to start?

A lot of the talks at this year’s Chrome Dev Summit were about progressive web apps. This makes me happy. But I think the focus is perhaps a bit too much on the “app” part and not enough on the “progressive” part.

What I mean is that there’s an inevitable tendency to focus on technologies—Service Workers, HTTPS, manifest files—and not so much on the approach. That’s understandable. The technologies are concrete, demonstrable things, whereas approaches, mindsets, and processes are far more nebulous in comparison.

Still, I think that the most important facet of building a robust, resilient website is how you approach building it rather than what you build it with.

Many of the progressive app demos use server-side and client-side rendering, which is great …but that aspect tends to get glossed over:

Browsers without service worker support should always be served a fall-back experience. In our demo, we fall back to basic static server-side rendering, but this is only one of many options.

I think it’s vital to not think in terms of older browsers “falling back” but to think in terms of newer browsers getting a turbo-boost. That may sound like a nit-picky semantic subtlety, but it’s actually a radical difference in mindset.

Many of the arguments I’ve heard against progressive enhancement—like Tom’s presentation at Responsive Field Day—talk about the burdensome overhead of having to bolt on functionality for older or less-capable browsers (even Jake has done this). But the whole point of progressive enhancement is that you start with the simplest possible functionality for the greatest number of users. If anything gets bolted on, it’s the more advanced functionality for the newer or more capable browsers.

So if your conception of progressive enhancement is that it’s an added extra, I think you really need to turn that thinking around. And that’s hard. It’s hard because you need to rewire some well-engrained pathways.

There is some precedent for this though. It was really, really hard to convince people to stop using tables for layout and start using CSS instead. That was a tall order—completely change the way you approach building on the web. But eventually we got there.

When Ethan came out with Responsive Web Design, it was an equally difficult pill to swallow, not because of the technologies involved—media queries, percentages, etc.—but because of the change in thinking that was required. But eventually we got there.

These kinds of fundamental changes are inevitably painful …at first. After years of building websites using tables for layout, creating your first CSS-based layout was demoralisingly difficult. But the second time was a bit easier. And the third time, easier still. Until eventually it just became normal.

Likewise with responsive design. After years of building fixed-width websites, trying to build in a fluid, flexible way was frustratingly hard. But the second time wasn’t quite as hard. And the third time …well, eventually it just became normal.

So if you’re used to thinking of the all-singing, all-dancing version of your site as the starting point, it’s going to be really, really hard to instead start by building the most basic, accessible version first and then work up to the all-singing, all-dancing version …at first. But eventually it will just become normal.

For now, though, it’s going to take work.

The recent redesign of Google+ is a true case study in building a performant, responsive, progressive site:

With server-side rendering we make sure that the user can begin reading as soon as the HTML is loaded, and no JavaScript needs to run in order to update the contents of the page. Once the page is loaded and the user clicks on a link, we do not want to perform a full round-trip to render everything again. This is where client-side rendering becomes important — we just need to fetch the data and the templates, and render the new page on the client. This involves lots of tradeoffs; so we used a framework that makes server-side and client-side rendering easy without the downside of having to implement everything twice — on the server and on the client.

This took work. Had they chosen to rely on client-side rendering alone, they could have built something quicker. But I think it was worth laying that solid foundation. And the next time they need to build something this way, it’s going to be less work. Eventually it just becomes normal.

But it all starts with thinking of the server-side rendering as the default. Server-side rendering is not a fallback; client-side rendering is an enhancement.

That’s exactly the kind of mindset that enables Jack Franklin to build robust, resilient websites:

Now we’ll build the React application entirely on the server, before adding the client-side JavaScript right at the end.

I had a chance to chat briefly with Jack at the Edge conference in London and I congratulated him on the launch of a Go Cardless site that used exactly this technique. He told me that the decision to flip the switch and make it act as a single page app came right at the end of the project. Server-side rendering was the default; client-side rendering was added later.

The key to building modern, resilient, progressive sites doesn’t lie in browser technologies or frameworks; it lies in how we think about the task at hand; how we approach building from the ground up rather than the top down. Changing the way we fundamentally think about building for the web is inevitably going to be challenging …at first. But it will also be immensely rewarding.

Polyfills and products

I was chatting about polyfills recently with Bruce and Remy—who coined the term:

A polyfill, or polyfiller, is a piece of code (or plugin) that provides the technology that you, the developer, expect the browser to provide natively. Flattening the API landscape if you will.

I mentioned that I think that one of the earliest examples of what we would today call a polyfill was the IE7 script by Dean Edwards.

Dean wrote this (amazing) piece of JavaScript back when Internet Explorer 6 was king of the hill and Microsoft had stopped development of their browser entirely. It was a pretty shitty time in browserland back then. While other browsers were steaming ahead with standards support, Dean’s script pulled IE6 up by its bootstraps and made it understand CSS2.1 features. Crucially, you didn’t have to write your CSS any differently for the IE7 script to work—the classic hallmark of a polyfill.

Scott has a great post over on the Filament Group blog asking To Picturefill, or not to Picturefill?. Therein, he raises the larger issue of when to use polyfills of any kind. After all, every polyfill you use is a little bit of a tax that the end user must pay with a download.

Polyfills typically come at a cost to users as well, since they require users to download and execute JavaScript in order to work. Sometimes, frequently even, that cost outweighs the benefits that the polyfill would bring. For that reason, the question of whether or not to use any polyfill should be taken seriously.

Scott takes a very thoughtful approach to using any polyfill, and I try to do the same. I feel that it’s important to have an exit strategy for every polyfill you decide to use. After all, the whole point of a polyfill is that it’s a stop-gap measure until a particular feature is more widely supported.
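
One way to keep that exit strategy manageable is to load the polyfill behind a feature test, so that browsers that don’t need it never pay the tax, and removing it later is a small, obvious change. A rough sketch (the feature and script path are illustrative):

  // Only load the polyfill in browsers that lack the feature
  if (!('fetch' in window)) {
    const script = document.createElement('script');
    script.src = '/js/fetch-polyfill.js';   // illustrative path
    document.head.appendChild(script);
  }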

And that’s where I run into one of the issues of working at an agency. At Clearleft, our time working with a client usually lasts a few months. At the end of that time, we’ll have delivered whatever the client needs: sometimes that’s design work; sometimes it’s design and a front-end pattern library.

Every now and then we get to revisit a project—like with Code for America—but that’s the exception rather than the rule. We’ve had to get very, very good at handover precisely because we won’t be the ones maintaining the code that we deliver (though we always try to budget in time to revisit the developers who are working with the code to answer any questions they might have).

That makes it very tricky to include a polyfill in our deliverables. We’d need to figure out a way of also including a timeline for revisiting that polyfill and evaluating when it’s time to drop it. That’s not an impossible task, but it’s much, much easier if you’re a developer working on a product (as opposed to a developer working at an agency). If you’re going to be the same person working on the code in the future—as well as working on it right now—it gets a lot easier to plan for evaluating polyfill usage further down the line. Set a recurring item in your calendar and you should be all set.

It’s a similar situation with vendor prefixes. Vendor prefixes were never intended to be a long-lasting part of any style sheet. Like polyfills, they’re supposed to be used with an exit strategy in mind: when the time is right, remove the prefixed styles, leaving only the unprefixed standardised CSS. Again, that’s a lot easier to do if you’re working on a product and you know that you’ll be the one revisiting the CSS later on. That’s harder to do at an agency where you’re handing over CSS to someone else.

I’m quite reluctant to use any vendor prefixes at all—which is as it should be; vendor prefixes should not be used lightly. Sometimes they’re unavoidable, but that shouldn’t stop us thinking about how to remove them at a later date.

I’m mostly just thinking out loud here. I guess my point is that certain front-end development techniques and technologies feel like they’re better suited to product work rather than agency work. Although I’m sure there are plenty of counter-examples out there too of tools that really fit the agency model and are less useful for working on the same product over a long period.

But even though the agency world and the product world are very different in lots of ways, both of them require us to think about the future. How long will the code you’re writing today last? And do you have a plan for when it needs updating or replacing?

Responding

Last week I had a responsive-themed tour of London.

On Tuesday I went up to Chelsea to spend the day workshopping with some people at Education First. It all went rather splendidly, I’m happy to report.

It was an interesting place. First of all, there’s the office building itself. Once owned by News International, it has a nice balance between open-plan and grouped areas. Then there’s the people. Just 20% of them are native English speakers. It was really nice to be in such a diverse group.

The workshop attendees represented a good mix of skills too: UX, front-end development, and visual design were at the forefront, but project management and content writing were also represented. That made the exercises we did together very rewarding.

I was particularly happy that the workshop wasn’t just attended by developers or designers, seeing as one of the messages I was hammering home all day was that responsive web design affects everyone at every stage of a project:

Y’see, it’s my experience that the biggest challenges of responsive design (which, let’s face it, now means web design) are not technology problems. Sure, we’ve got some wicked problems when dealing with non-flexible media like bitmap images, which fight against the flexible nature of the web, but thanks to the work of some very smart and talented people, even those kinds of issues are manageable.

No, the biggest challenges, in my experience, are to do with people. Specifically, the way that people work together.

On Thursday evening, I reiterated that point at The Digital Pond event in Islington …leading at least one person in the audience to declare that they were having an existential crisis (not my intention, honest).

I also had the pleasure of hearing Sally give her take on responsive design. She was terrific at Responsive Day Out 2 and she was, of course, terrific here again. If you get the chance to see her speak, take it.

There should be videos from Digital Pond available at some point, so you’ll be able to catch up with our talks then.

Notes from the edge

I went up to London for the Edge Conference on Friday. It’s not your typical conference. Instead of talks, there are panels, but not the crap kind, where nobody says anything of interest: these panels are ruthlessly curated and prepared. There’s lots of audience interaction too, but again, not the crap kind, where one or two people dominate the discussion with their own pet topics: questions are submitted ahead of time, and then you are called upon to ask it at the right moment. It’s like Question Time for the web.

Components

The first panel was on that hottest of topics: Web Components. Peter Gasston kicked it off with a superb introduction to the subject. Have a read of his equally-excellent article in Smashing Magazine to get the gist.

Needless to say, this panel covered similar ground to the TAG meetup I attended a little while back, and left me with similar feelings: I’m equal parts excited and nervous; optimistic and worried. If Web Components work out, and we get a kind of emergent semantics of UI widgets, it’ll be a huge leap forward for the web. But if we end up with a Tower of Babel, things could get very messy indeed. We’ll probably get both at once. And I think that’ll be (mostly) okay.

I butted into the discussion when the topic of accessibility came up. I was a little worried about what I was hearing, which was mainly, “Oh, ARIA takes care of the accessibility.” I felt like Web Components were passing the buck to ARIA, which would be fine if it weren’t for the fact that ARIA can’t cover all the possible use-cases of Web Components.

I chatted about this with Derek and Nicole during the break, but I’m not sure if I was articulating my thoughts very well, so I’ll have another stab at it here:

Let me set the scene for Web Components…

Historically, HTML has had a limited vocabulary for expressing interface widgets—mostly a bunch of specialised form fields like, say, the select element. The plus side is that there’s a consensus of understanding among the browsers, so you don’t have to explain what a select element does; the browsers already know. The downside is that whenever we want to add a new interface element like input type="range", it takes time to get into browsers and through the standards process. Web Components allow you to conjure up interface elements, and you don’t have to lobby browser makers or standards groups in order to make browsers understand your newly-minted element: you provide all the behavioural and styling instructions in one bundle.

So Web Components make use of HTML, JavaScript, and (scoped) CSS. The possibility space for the HTML is infinite: if you need an element that doesn’t exist, you just invent it. The possibility space for the JavaScript is pretty close to infinite: it’s a Turing-complete language that can be wrangled to do just about anything. The possibility space for CSS isn’t infinite, but it’s pretty darn big: there’s not much you can’t do with it at this point.
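
To make that bundle concrete, here’s a minimal sketch using the custom elements syntax that browsers eventually settled on (the element name and behaviour are invented for illustration). Note that nothing in it tells assistive technology what the element is; that is exactly the gap discussed below:

  class TabsPanel extends HTMLElement {
    constructor() {
      super();
      // The styling and behaviour travel with the element itself
      const shadow = this.attachShadow({ mode: 'open' });
      shadow.innerHTML = '<style>:host { display: block; }</style><slot></slot>';
    }
    connectedCallback() {
      // Behaviour goes here: keyboard handling, switching panels, and so on
    }
  }

  customElements.define('tabs-panel', TabsPanel);

  // <tabs-panel> can now be used in markup like any built-in element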

What’s missing from that bundle of HTML, JavaScript, and CSS are hooks for assistive technology. Up until now, this is something we’ve mostly left to the browser. We don’t have to include any hooks for assistive technology when we use a select element because the browser knows what it is and can expose that knowledge to the assistive technology. If we’re going to start making up our own interface elements, we now have to take on the responsibility of providing that information to assistive technology.

How do we do that? Well, right now, our only option is to use ARIA …but the possibility space defined by ARIA is much, much smaller than HTML, JavaScript, or CSS.

That’s not a criticism of ARIA: that’s the way it was designed. It’s a reactionary technology, designed to plug the gaps where the native semantics of HTML just don’t cut it. The vocabulary of ARIA was created by looking at the kinds of interface elements people are making—tabs, sliders, and so on. That’s fine, but it can’t scale to keep pace with Web Components.

The problem that Web Components solve—the fact that it currently takes too long to get a new interface element into browsers—doesn’t have a corresponding solution when it comes to accessibility hooks. Just adding more and more predefined ARIA roles won’t cut it—we need some kind of extensible accessibility that matches the expressive power of Web Components. We don’t need a bigger vocabulary in ARIA, we need a way to define our own vocabulary—an extensible ARIA, if you will.

Hmmm… I’m still not sure I’m explaining myself very well.

Anyway, I just want to make sure that accessibility doesn’t get left behind (again!) in our rush to create a new solution to our current problems. With Web Components still in their infancy, this feels like the right time to raise these concerns.

That highlights another issue, one that Nicole picked up on. It’s really important that the extensible web community and the accessibility community talk to each other.

Frankly, the accessibility community can be its own worst enemy sometimes. So don’t get me wrong: I’m not bringing up my concerns about the accessibility of Web Components in order to cry “fail!”—I just want to make sure that it’s on the table (and I’m glad that Alex is one of the people driving Web Components—his history with Dojo reassures me that we can push the boundaries of interface widgets on the web without leaving accessibility behind).

Anyway …that’s enough about that. I haven’t mentioned all the other great discussions that took place at Edge Conference.

Developer Tooling

The Web Components panel was followed by a panel on developer tools. This was dominated by representatives from different browsers, each touting their own set of in-browser tools. But the person who I really wanted to rally behind was Kenneth Auchenberg. He quite rightly asks why our developer tools and our text editors are two different apps. And rather than try to put text editors into developer tools, what we really want is to pull developer tools into our text editors …all the developer tools from all the browsers, not just one set of developer tools from one specific browser.

If you haven’t seen Kenneth’s presentation from Full Frontal, I urge you to watch it or listen to it.

I had my hand up to jump into the discussion towards the end, but time ran out so I didn’t get a chance. Paul came over afterwards and asked what I was going to say. Here’s what I told him…

I’m fascinated by the social dynamics around how browsers get made. This is an area where different companies are simultaneously collaborating and competing.

Broadly speaking, the feature set of a web browser can be divided into two buckets:

In one bucket, you’ve got the support for standards like HTML, CSS, JavaScript. Now, individual browsers might compete on how quickly or how thoroughly they get those standards implemented, but at this point, there’s no disagreement about the fact that proprietary crap is bad, standards are good, and that no matter how painful the process can be, browser makers all need to get together and work on standards together. Heck, even Apple can’t avoid collaborating on this stuff.

In the other bucket, you’ve got all the stuff that browsers compete against each other with: speed, security, the user interface, etc. A lot of this takes place behind closed doors, and that’s fine. There’s no real need for browser makers to collaborate on this stuff, and it could even hurt their competitive advantage if they did collaborate.

But here’s the problem: developer tools seem to be coming out of that second bucket instead of the first. There doesn’t seem to be much communication between the browser makers on developer tools. That’s fine if you see developer tools as an opportunity for competition, but it’s lousy if you see developer tools as an opportunity for interoperability.

This is why Kenneth’s work is so important. He’s crying out for more interoperability between browsers when it comes to developer tools. Why can’t they all use the same low-level APIs under the hood? Then they can still compete on how pretty their dev tools look, without making life miserable for developers who want to move quickly between browsers.

As painful as it might be, I think that browser makers should get together in some semi-formalised way to standardise this stuff. I don’t think that the W3C or the WHATWG are necessarily the right places for this kind of standardisation, but any kind of official cooperation would be good.

Build Process

The panel on build processes for front-end development kicked off with Gareth saying a few words. Some of those words included the sentence:

Make is probably older than you.

Cue glares from me and Scott.

Gareth also said that making websites means making software. We’re all making software—live with it.

This made me nervous. I’ve always felt that one of the great strengths of the web has been its low barrier to entry. The idea of a web that can only be made by qualified software developers doesn’t sound like a good thing to me.

Fortunately, things got cleared up later on. Somebody else asked a question about whether the barrier to entry was being raised by the complexity of tools like preprocessors, compilers, and transpilers. The consensus of the panel was that these are power tools for power users. So if someone were learning to make a website from scratch, you wouldn’t start them off with, say, Sass, without first learning CSS.

It was a fun panel, made particularly enjoyable by the presence of Kyle Simpson. I like the cut of his jib. Alas, I didn’t get the chance to tell him that in person. I had to duck out of the afternoon’s panels to get back to Brighton due to unforeseen family circumstances. But I did manage to catch some of the later panels on the live stream.

Closing thoughts

A common thread I noticed amongst many of the panels was a strong bias for decentralisation rather than collaboration. That was most evident with Web Components—the whole point is that you can make up your own particular solution rather than waiting for a standards body. But it was also evident in the Developer Tools line-up, where each browser maker is reinventing the same wheels. And when it came to Build Process, it struck me that everyone is scratching their own itch instead of getting together to work on a shared solution.

There’s nothing wrong with that kind of Darwinian approach to solving our problems, but it does seem a bit wasteful. Mairead Buchan was at Edge Conference too and she noticed the same trend. Sounds like she’s going to do something about it too.

Launching for America

I’ve already written a bit about the process of working with Code for America, which has been an absolute pleasure. Just today, Jon described it as “the closest thing to a dream project.”

I concur. Not only did the client communication work out really well, but their willingness to share the pattern library we put together warms the cockles of my heart.

When Clearleft’s part in the project officially wrapped up, I wrote:

It’ll be a while yet before the new site rolls out.

That was exactly one month ago.

The new Code for America website went live last Friday.

I’m impressed! That’s a pretty short timescale to rebuild a fairly large website, not only changing the front-end codebase, but also switching out the back-end stack as well. They must’ve been working flat out.

I’ve worked on projects in the past where my initial excitement at the project’s wrap diminished as the site launch date slipped further and further over the horizon of the future. It isn’t unusual to have a gap of many months between the end of Clearleft’s time on a project and seeing the site go live. I’m really happy that the Code for America project bucks that trend.

Climbing Mount Responsive

I’m back from Munich, where I spent three solid days workshopping with AutoScout24. I’m happy to report that it went really, really well. It’s restored my confidence after the negative feedback I got in Tel Aviv.

Three days is quite a long time to spend workshopping, so I was mostly winging it. But that extended period also allowed us to dive deep into specific issues and questions (all the usual suspects: how to handle navigation, images, complex interactions, etc.).

The real issues, however, were much more “bigger picture”—how to handle the transition to responsive of a big desktop-centric site that’s been growing for over a decade. By the end of the three days, we had divided the options into three groups:

  1. Start making any new pages and sections of the site responsive. After a while of doing that, the team would develop a pretty good feeling of what it would take to then go back and retrofit what’s already online. The downside of this approach is that it would result in an inconsistent user experience: users would be moving between responsive and non-responsive parts of the site, which could be confusing.
  2. Leave the current fixed-width grid as it is, but focus on making all the components of the page flexible (see the sketch after this list). Once all the components are fluid, then it should be a matter of switching over to a fluid grid in one fell swoop. On the plus side, this means that the whole site would then be responsive. On the negative side, until all the components have been made flexible—which could take some time—the site remains rigidly fixed-width and desktop-centric.
  3. Rebuild the mobile site, using it as a seed from which to grow a new responsive site. On the face of it, having a separate mobile subdomain might seem like a millstone around your neck if you’re trying to push for a responsive design. In practice though, it can be enormously useful. Mostly it’s a political issue: whereas ripping out the desktop site and starting from scratch is a huge task that would require everyone’s buy-in, nobody gives a shit about the mobile subdomain. Both the BBC news team and The Guardian are having great success with this approach, building mobile-first responsive sites bit-by-bit on the m. subdomain, with the plan to one day flip the switch and make the subdomain the main site. The downside is that until the switch is flipped, you’ve still got to deal with redirecting mobile traffic—probably using some nasty user-agent sniffing—and all the issues that come with having your content appearing at more than one URL.
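
To make the second option a little more concrete, here’s a minimal sketch of what making a single component flexible might look like. The class name and pixel values are invented for illustration (they’re nothing to do with the AutoScout24 codebase); the technique is simply the old “target divided by context” calculation, turning a fixed pixel width into a percentage of its container:

<style>
  .promo {
    /* was: width: 300px inside a 960px fixed grid */
    width: 31.25%; /* 300 ÷ 960 = 0.3125, i.e. target divided by context */
  }
  .promo img {
    max-width: 100%; /* images shrink along with their container */
    height: auto;
  }
</style>
<div class="promo">
  <img src="/path/to/image.png" alt="image description">
</div>

Do that to each component in turn and the eventual switch to a fluid grid becomes a much smaller step.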

There’s no doubt about it: trying to apply responsive design to large-scale existing desktop-centric sites is really, really hard. The message I keep repeating in my workshops is that you can’t expect to just sprinkle on some magic media-query fairydust—it just doesn’t work that way. Instead, you’ve got to figure out a way to reframe all your challenges into a mobile-first way of thinking.

Instead of asking “How can I make these patterns (mega-menus, lightboxes, complex data tables) work when the screen size shrinks?”, you need to ask “What’s the problem they’re supposed to be solving, and how would I design a solution for the small screen to start with?” Once you’ve done that, then it becomes a matter of scaling up to the large screen …which is actually a much simpler problem space.

As is so often the case with web design, it requires the application of progressive enhancement. In the case of responsive design, that means starting with small-screen styles, small-screen images, and small-screen content priority. Then you can progressively enhance with layout styles, larger images, and conditional loading of nice-to-have extra content. Oh, and you absolutely have to accept and embrace the fact that websites do not need to look the same in every browser.
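
As a rough illustration of that layering, here’s a minimal sketch (the selectors and breakpoint value are hypothetical, not taken from any particular project): the small-screen styles are the default, and layout is only added once the viewport can accommodate it.

<style>
  /* small-screen defaults: a single column, no layout styles required */
  .main,
  .sidebar {
    width: auto;
  }
  /* progressively enhance with layout once there is room */
  @media (min-width: 40em) {
    .main { float: left; width: 65%; }
    .sidebar { float: right; width: 30%; }
  }
</style>

Browsers that don’t understand media queries simply get the linearised small-screen layout, which is a perfectly usable fallback.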

Making that change in thinking can be hugely challenging.

Remember when we were all making websites with tables for layout? Then the web standards movement came along, pushing for the separation of structure and presentation, urging us to use CSS for layout. It took the brain-rewiring power of the CSS Zen Garden to really give people that “A-ha!” moment.

Mobile-first responsive design requires a similar rewiring of the brain. And if you’re used to doing things a certain way, then it’s natural to resist such drastic change—although as Elliot pointed out at the Responsive Day Out, when you first make the switch it might be very tricky, but it gets easier and easier with each project.

Still, it can be a difficult message to hear. I suspect that’s why my workshop in Tel Aviv wasn’t so warmly received—I didn’t provide any easy answers.

The designers and developers at AutoScout24 also didn’t find it easy to accept how much they’d have to rethink their approach, but by the end of the three days they had a much clearer idea of how they could go about making that change. I’m really curious to see where they’ll go from here. Personally, I’m very optimistic about their prospects for successfully pulling off a large-scale responsive relaunch.

There are two main reasons for my optimism:

  1. They’ve already put together a front-end styleguide; a UI library of components. The fact that they’re already thinking about breaking things down into their component parts is a terrific approach (and they also said they’re planning to make their UI library public, which makes me very happy indeed).
  2. Developers, designers, and information architects work side by side. The web department works in teams, but those teams aren’t organised by job role. Instead each small team of 4 or 5 people has a product manager, a UX designer, a visual designer, and a developer or two.

I can’t emphasise enough how important that kind of collaborative environment is.

I’ve said it before, and I’ll say it again; the biggest challenges of responsive design are not technology problems:

No, the biggest challenges, in my experience, are to do with people. Specifically, the way that people work together.

I’ve spoken to some companies who were eager to make the switch to responsive design, but who have designers and developers sitting in different rooms, or on different floors, or buildings, or even countries. That’s when my heart sinks. Trying to work in the iterative way that a good responsive project demands is going to be massively difficult—if not downright impossible—in that environment.

So I’m pretty confident that if the designers and developers at AutoScout24 put their minds to it, they can rise to the enormous challenge that lies ahead of them. They’ve got the right working environment, they’ve got a UI library, and they’ve got the option of using their existing mobile subdomain. Most of all, they’ve demonstrated a willingness to accept all the challenges that come with changing from a desktop-centric to a content-first mindset.

All in all, it was a very productive three days in Munich. It was hard work, but then again, I had the option of rewarding myself with some excellent Bavarian food and beer each evening.

Abendessen

Communication for America

Mandy has written a great article about making remote teams work. It’s an oft-neglected aspect of working on a product when you’ve got people distributed geographically.

But remote communication isn’t just something that’s important for startups and product companies—it’s equally important for agencies when it comes to client communication.

At Clearleft, we occasionally work with clients right here in Brighton, but that’s the exception. More often than not, the clients are based in London, or somewhere else in the UK. In the case of Code for America, they’re based in San Francisco—that’s eight or nine timezones away (depending on the time of year).

As it turned out, it wasn’t a problem at all. In fact, it worked out nicely. At the end of every day, we had a quick conference call, with two or three people at our end, and two or three people at their end. For us, it was the end of the day: 5:30pm. For them, the day was just starting: 9:30am.

We’d go through what we had been doing during that day, ask any questions that had cropped up over the course of the day, and let them know if there was anything we needed from them. Then they had the whole day to put it together while we went home. The next morning (from our perspective), it would be waiting in our in/drop-boxes.

Meanwhile, from the perspective of Code for America, they were coming into the office every morning and starting the day with a look over our work, as though we had been beavering away throughout the night.

Now, it would be easy for me to extrapolate from this that this way of working is great and everyone should do it. But actually, the whole timezone difference was a red herring. The real reason why the communication worked so well throughout the project was because of the people involved.

Right from the start, it was clear that, because of time and budget constraints, we’d have to move fast. We wouldn’t have the luxury of debating everything in detail and getting every decision signed off. Instead we had a sort of “rough consensus and running code” approach that worked really well. It worked because everyone understood that was what was happening—if just one person was expecting a more formalised structure, I’m sure it wouldn’t have gone quite so smoothly.

So we provided materials in whatever level of fidelity made sense for the idea under discussion. Sometimes that was a quick sketch. Sometimes it was a fairly high-fidelity mockup. Sometimes it was a module of markup and CSS. Whatever it took.

Most of all, there was a great feeling of trust on both sides of the equation. It was clear right from the start that the people at Code for America were super-smart and weren’t going to make any outlandish or unreasonable requests of Clearleft. Instead they gave us just the right amount of guidance and constraints, while trusting us to make good decisions.

At one point, Jon was almost complaining about not getting pushback on his designs. A nice complaint to have.

Because of the daily transatlantic “stand up” via teleconference, there was a great feeling of inevitability to the project as it came together from idea to execution. Inevitability doesn’t sound like a very sexy attribute of a web project, but it’s far preferable to the kind of project that involves milestones of “big reveals”—the Mad Men approach to project management.

Oh, and we made sure that we kept those transatlantic calls nice and short. They never lasted longer than 10 or 15 minutes. We wanted to avoid the many pitfalls of conference calls.

Pattern sharing

Mike has written about the Code for America alpha website that we collaborated on:

We chose to work with ClearLeft because they develop a pattern portfolio (a pattern/style library) which would allow us to scale our work to our Brigades. This unique approach has aligned perfectly with our work style and decentralized organizational structure.

Thankfully, I think the approach of delivering a pattern portfolio (instead of just pages) isn’t so unique these days. Mind you, it still seems to be more common with in-house teams than agencies. The Mailchimp pattern library is a classic example.

But agencies like Paravel are—like Clearleft—delivering systems, not pages. Dave wrote about providing responsive deliverables:

Responsive deliverables should look a lot like fully-functioning Twitter Bootstrap-style systems custom tailored for your clients’ needs.

I think that’s a good way of looking at it: a Bootstrap for every project.

Here’s the front-end style guide for Code for America.

Usually these front-end deliverables will be password-protected on the Clearleft extranet for the client’s eyes only, but Code for America are all about openness, so they’re more than willing to let us share it with the world. That makes me very happy. I remember encouraging the guys at Starbucks to publish their front-end style guide and I’ve written about this spirit of sharing before:

These style guides and pattern libraries aren’t being published in an attempt to provide ready-made solutions—every project should have its own distinct pattern library. Instead, these pattern libraries are being published in a spirit of openness and sharing …a way of saying “Hey, this is what worked for us in these particular circumstances.”

If you’re poking around the Code for America style guide, you’ll notice that it borrows some ideas from the pattern primer idea I published a while back. But in this iteration, the markup is available via a toggle—a nice variation. There’s also a patchwork page that provides a nice glance-able uninterrupted view of the same patterns.
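
In the abstract, a pattern entry with a markup toggle might look something like this minimal sketch. The details/summary approach and the class names here are purely illustrative; this isn’t necessarily how the Code for America style guide implements its toggle:

<section class="pattern">
  <h2>Primary button</h2>
  <a class="btn btn-primary" href="#">Sign up</a>
  <details>
    <summary>Show markup</summary>
    <pre><code>&lt;a class="btn btn-primary" href="#"&gt;Sign up&lt;/a&gt;</code></pre>
  </details>
</section>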

Every project is a learning experience and each front-end style guide gives us ideas about how to do the next one better. In fact, Mark is busy working on better internal tools for creating these kinds of deliverables—something we’ll definitely be sharing. In the meantime, I’ll be encouraging other clients to be as open as Code for America have been in allowing us to share these deliverables.

For more on the usefulness of front-end style guides, be sure to read Paul’s article on style guides for the web, Anna’s classic 24 Ways article, and of course, Anna’s pocket guide from Five Simple Steps.

Coding for America

Back when I was wandering around America in August, I mentioned that I met up with Mike Migurski in San Francisco:

I played truant from UX Week this morning to meet up with Mike for a coffee and a chat at Cafe Vega. We were turfed out when the bearded, baseball-capped, Draplinesque barista announced he had to shut the doors because he needed to “run out for some milk.” So we went around the corner to the Code For America office.

It wasn’t just a social visit. Mike wanted to chat about the possibility of working with Clearleft. The Code for America site was being overhauled. The new site needed to communicate directly with volunteers, rather than simply being a description of what Code for America does. But the site also needed to be able to change and adapt as the organisation’s activities expanded. So what they needed was not a set of page designs; they needed a system of modular components that could be assembled in a variety of ways.

This was music to my ears. This sort of systems-thinking is exactly the kind of work that Clearleft likes to get its teeth into. I showed Mike some of the previous work we had done in creating pattern libraries, and it became pretty clear that this was just what they were looking for.

When I got back to Brighton, Clearleft assembled a small squad to work on the project. Jon would handle the visual design, with the branding work of Dojo4 as a guide. For the front-end coding, we brought in some outside help. Seeing as the main deliverable for this project was going to be a front-end style guide, who better to put that together than the person who literally wrote the book on front-end style guides: Anna.

I’ll go into more detail about the technical side of things on the Clearleft blog (and we’ll publish the pattern library), but for now, let me just say that the project was a lot of fun, mostly because the people we were working with at Code for America—Mike, Dana, and Cyd—were so ridiculously nice and easy-going.

Anna and Jon would start the day by playing the unofficial project theme song and then get down to working side-by-side. By the end of the day here in Brighton, everyone was just getting started in San Francisco. So the daily “stand up” conference call took place at 5:30pm our time; 9:30am their time. The meetings rarely lasted longer than 10 or 15 minutes, but the constant communication throughout the project was invaluable. And the time difference actually worked out quite nicely: we’d tell them what we had been working on during our day, and if we needed anything from them; then they could put that together during their day so it was magically waiting for us by the next morning.

It’ll be a while yet before the new site rolls out, but in the meantime they’ve put together an alpha site—with a suitably “under construction” vibe—so that anyone can help out with the code and content by making contributions to the github repo.

A Gov Supreme

I’ve been doing some workshopping and consultancy at a few different companies recently, mostly about responsive design. I can’t help but feel a little bad about it because, while I think they’re expecting to get a day of CSS, HTML, and JavaScript, what they actually get is the uncomfortable truth that responsive design changes everything …changes that start long before the front-end development phase.

I explain the ramifications of responsive design, hammer on about progressive enhancement like a broken record, extoll the virtues of a content-first approach, exhort them to read A Dao of Web Design, and let them know that, oh, by the way, your entire way of working will probably have to change.

Y’see, it’s my experience that the biggest challenges of responsive design (which, let’s face it, now means web design) are not technology problems. Sure, we’ve got some wicked problems when dealing with non-flexible media like bitmap images, which fight against the flexible nature of the web, but thanks to the work of some very smart and talented people, even those kinds of issues are manageable.

No, the biggest challenges, in my experience, are to do with people. Specifically, the way that people work together.

Old waterfallesque processes where visual designers work entirely in Photoshop before throwing PSDs over the wall to developers just don’t cut it any more. Old QA testing processes that demanded visual consistency across all browsers and platforms are just ludicrous.

The thing is …those old processes were never any good. We fooled ourselves into thinking they worked, but that was only because we were working from some unfounded assumption: that everyone is on broadband, that everyone has a nice big screen, that everyone has a certain level of JavaScript capability. The explosion in diversity of mobile devices (and with it, the rise of responsive design) has shone a light on those assumptions and exposed those old processes for the façades that they always were.

When I’m doing a workshop and I tell that to designers, developers, and project managers, they often respond by going through the five stages of grief. Denial, anger, bargaining, depression …I try to work with them through those reactions until they ultimately get to acceptance.

Somewhere between the “bargaining” and “depression” phase, somebody inevitably passes the buck further up the chain:

“Oh, we’d love to do what you’re saying, but our clients would never go for it.” Or “You’ve convinced me but there’s no way our boss will ever agree to this.”

I’ve got to be honest: sometimes I think we use “the client” and “the boss” as a crutch. I’m also somewhat bemused when people ask me for advice to help them convince their client or their boss. I don’t know your boss—how could I possibly offer any relevant advice?

Still, I’ve written about this question of “How do I convince…?” before:

Something I’ve found useful in the past is the ability to point at trailblazers and say “like that!” Selling the idea of web standards became a whole lot easier after Doug redesigned Wired and Mike redesigned ESPN. It’s a similar situation with responsive design: clients are a lot more receptive to the idea now that The Boston Globe site is live.

When it comes to responsive design, there’s one site that should thoroughly shame anyone who claims that they can’t convince their boss to do the right thing: GOV dot UK.

It’s responsive. It puts user needs first. It’s beautiful. It even won the Design Museum’s design of the year, for crying out loud.

This isn’t some flashy lifestyle business. This isn’t some plucky young disruptive startup. This is the British government, an organisation so stodgy and bureaucratic that there are multiple sitcoms about its stodginess and bureaucracy.

Gov.uk is an inspiration. If the slowest-moving organisation in the country can turn itself around, embrace a whole new way of working, and produce a beautiful, usable, responsive site, then the rest of us really have no excuse.

Building Matter

When I was preparing my Responsive Enhancement workshop for last year’s dConstruct, I thought I should create an example site to demonstrate the various techniques I would be talking about, showing how responsive design could be combined with progressive enhancement to make something that works great on any device.

Round about that time, while I was scratching my head trying to figure out what the fake example site should be, I got an email from Bobbie asking if I wanted to meet up for a coffee and a chat. We met up and he told me about a project he wanted to do with his colleague Jim Giles. They wanted to create a place for really good long-form journalism on science and technology.

“The thing is,” said Bobbie, “we want to make sure it’s readable on phones, on tablets, on Kindles, everything really. But we don’t know the best approach to take for that.”

“Well, Bobbie, it’s funny you should mention that,” I said. “I’m currently putting together a workshop all about responsive design, which sounds perfect for what you want to do. And I need to create an example site to showcase the ideas.”

It was a perfect match. Bobbie gave me his design principles, personas, and—most importantly—content. In return, he would get a prototype that would demonstrate how that content could be readable on any device; perfect for drumming up interest and investment.

The workshop went really well, and some great ideas came out of the brainstorming the attendees were doing.

A few months later, Bobbie and Jim put the project—now called Matter—up on Kickstarter. They met their target, and then some. Clearly there was a lot of interest in well-written original journalism on the web. Now they had to build it.

They got hold of Phil to do the backend, so that was sorted, but Bobbie asked me if I knew any kick-ass designers and front-end developers.

“Well, I would love to work on it,” I said. “So how about working with Clearleft?”

“I didn’t think you guys would be available,” he said. “I’d love to work with you!”

And so we began a very fun collaboration. Paul moved his desk next to mine and we started playing around with the visual design and front-end development. Phil and Bobbie came by and we hammered out design principles, user journeys, and all that fun stuff.

Finishing up a great day of project planning with Bobbie and Phil

It was really nice to work on a project where readability took centre stage. “Privilege the reading experience” was our motto.

Paul did some fantastic work, not just on creating a typographic system, but also creating a brand identity including what I think is a really great logo.

Wearing my @ReadMatter T-shirt

I started putting together a system of markup and CSS patterns, using the device lab to test them. Phil started implementing those patterns using Django. It all went very smoothly indeed.

Testing Placekittening

Today is launch day. Matter is live. If you backed the project on Kickstarter, you’ve got mail. If not, you can buy the first issue for a mere 99 cents.

The first piece is a doozy. It’s called Do No Harm:

Why do some people want to amputate a perfectly healthy limb? And why would any doctor help them?

If this is indicative of the kind of work that Matter will be publishing, it will definitely live up to its ambitious promise:

MATTER commissions, crafts and publishes unmissable journalism about science, technology and the ideas shaping our future.

How do I convince…?

When I was speaking at An Event Apart in Austin I gave a somewhat rambling presentation. As usual, I was hammering home the importance of progressive enhancement, a methodology that’s actually not that tricky once you accept that websites do not need to look exactly the same in every browser and neither do websites need to be experienced exactly the same in every browser.

I had some time after the talk to answer a question or two from the audience—something I always enjoy. One of the questions went something along the lines of “All of this sounds great, but how do I convince my boss…?”

I smiled.

I smiled because I had been having exactly this same conversation with Beth at the opening party the night before. Here’s what I told her (and what I repeated in answering the person who asked the question)…

I’ve been giving talks since around 2005. At first I talked about DOM Scripting, trying to convince people that JavaScript wasn’t evil (at a time when JavaScript was very unpopular). Later I spoke about Ajax and progressive enhancement, hoping to persuade people to use the technology in a responsible way. Sometimes I gave talks about microformats. Later I got excited by HTML5 and spoke about that. More recently I’ve been talking about the importance of taking a Content First approach to responsive design.

Almost every time I gave a talk—no matter what the subject matter—someone would inevitably say “Yes, but how do I convince my boss?” or “That’s all well and good but how do I convince my clients?”

In fact, one time when I was giving a talk at From The Front in Italy, I made an extra slide that I kept in reserve after the final “thank you” slide. It simply read “How do I convince…?” Sure enough, when I was taking questions from the audience, someone asked that very question (and I advanced my slide deck and looked like a mind-reader).

The reason I mention this recurring trend is that I find it reassuring. We’ve been here before. What each one of my previous experiences has shown me is that things do change. Change might seem slow at times, but there’s a big difference between slow and static.

I remember the days of the web standards campaign. Trying to convince developers to use CSS for presentation instead of tables seemed like a Sisyphean task. But we got there.

I felt like a lone voice in the wilderness crying out in favour of liquid layouts for years, but now—thanks to the rise of responsive design—change has finally come. As for responsive design itself, I was sure it was going to be another uphill struggle to convince people of the benefits—and I was all set to take a hardline approach—but I’ve been pleasantly surprised to see that it’s an idea whose time has come.

I’m not the only one who has noticed this cyclical trend of new technologies and methodologies being pessimistically dismissed. Eric recently said:

So take heart. All of this has happened before. All of this will happen again.

But what about answering the question? How do you convince clients/bosses to adopt a new technique or technology?

I’m afraid that this is the point at which I tend to throw up my hands and say, “Don’t ask me! That’s not my job—I just make websites.” But it’s a perfectly valid question and I think it would be good to have resources we could all point to when we need some ammunition.

Luke is exceptionally good at providing data to back up his arguments. I wrote on the back of his book:

Luke doesn’t just rely on his wondrous wit and marvellous writing style to make an overwhelmingly convincing argument for designing the mobile experience first; he also hammers home all of his points with oodles and oodles of scrumptious data.

That’s a good tactic. As he once said to me, “If you torture data for long enough, you can get it to confess to anything.”

Something I’ve found useful in the past is the ability to point at trailblazers and say “like that!” Selling the idea of web standards became a whole lot easier after Doug redesigned Wired and Mike redesigned ESPN. It’s a similar situation with responsive design: clients are a lot more receptive to the idea now that The Boston Globe site is live. But of course if you only ever follow the trailblazers, you’ll never get the opportunity to blaze a new trail yourself. Frustrating.

Another tactic that I’ve used in the past is to simply not ask for permission, but go ahead and use the new technologies and techniques anyway. That isn’t always practical but it’s worth a try. Rather than spending valuable time trying to convince your boss or client that they should let you do something, just do it (if you’ll pardon the Nike-ian platitude).

Andy likens the “How do I convince…?” conundrum to having a plumber come ‘round to fix your sink, only to ask you “Is it alright if I use this particular wrench?” You’re the plumber—you decide!

Except we’re not in the plumbing business (and we’re clearly not in the metaphor business either).

I do sometimes wonder whether we use the big bad client or the big bad boss as a crutch. “Oh, I’d love to try out this technique, but the client/boss would never go for it. Something something IE6.” Maybe we’re not giving them enough credit. Given the right argument, they might just listen to reason.

Secret src

There’s been quite a brouhaha over the past couple of days around the subject of standardising responsive images. There are two different matters here: the process and the technical details. I’d like to address both of them.

Ill communication

First of all, there’s a number of very smart developers who feel that they’ve been sidelined by the WHATWG. Tim has put together a timeline of what happened:

  1. Developers got involved in trying to standardize a solution to a common and important problem.
  2. The WHATWG told them to move the discussion to a community group.
  3. The discussion was moved (back in February), a general consensus (not unanimous, but a majority) was reached about the picture element.
  4. Another (partial) solution was proposed directly on the WHATWG list by an Apple employee.
  5. A discussion ensued regarding the two methods, where they overlapped, and what the general opinions of each were. The majority of developers favored the picture element and the majority of implementors favored the srcset attribute.
  6. While the discussion was still taking place, and only 5 days after it was originally proposed, the srcset attribute (but not the picture element) was added to the draft.

A few points in that timeline have since been clarified. That second step—“The WHATWG told them to move the discussion to a community group”—turns out to be untrue. Some random person on the WHATWG mailing list (which is open to everyone) suggested forming a Community Group at the W3C. Alas, nobody else on the WHATWG mailing list corrected that suggestion.

Then there’s the apparent causality between steps 4 and 6. Initially, I also assumed that this was what happened: that Ted had proposed the srcset solution without even being aware of the picture solution that the Community Group had independently come up with. It turns out that’s not the case. Ted had drafted another email about the picture proposal, but he never ended up sending it. In fact, his email about srcset had been sitting in draft for quite a while and he only sent it out when he saw that Hixie was finally collating feedback on responsive images.

So from the outside it looked like there was preferential treatment being given to Ted’s proposal because it came from within the WHATWG. That’s not the case, but it must be said: the fact that srcset was so quickly added to the spec (albeit in a different form) doesn’t look good. It’s easy to understand why the smart folks in the Responsive Images Community Group felt miffed.

But let’s be clear: this is exactly how the WHATWG is supposed to work. Use-cases are evaluated and whatever Hixie thinks is the best solution gets put in the spec, regardless of how popular or unpopular it is.

Now, if that sounds abhorrent to you, I completely understand. A dictatorship should cause us to recoil.

That’s where the W3C come in. Their model is completely different. Everything is done by committee there.

Steve Faulkner chimed in on Tim’s post with his take on the two groups:

It seems like the development of HTML has turned full circle, the WHATWG was formed to overthrow the hegemony of the W3C, now the W3C acts as a counter to the hegemony of the WHATWG.

I think he’s right. The W3C keeps the rapid, sometimes anarchic approach of the WHATWG in check. But the opposite is also true. Without the impetus provided by the WHATWG, I’m not sure that the W3C HTML Working Group would ever get anything done. There’s a balance that actually works quite well in practice.

Back to the situation with responsive images…

Unfortunately, it appears to people within the Responsive Images Community Group that all their effort was wasted because their proposed solution was summarily rejected. In actuality, all the use-cases they gathered were immensely valuable. But it’s certainly true that the WHATWG didn’t make it clear how and where developers could best contribute.

Community Groups are a W3C creation. They don’t have anything to do with the WHATWG, who do all their work on their own mailing list, their own wiki and their own IRC channel.

I do think that the W3C Community Groups offer a good place to go bike-shedding on problems. That’s a term that’s usually used derisively but sometimes it’s good to have a good ol’ bike-shedding without clogging up the mailing list for everyone. But it needs to be clear that there’s a big difference between a Community Group and a Working Group.

I wish the WHATWG had done a better job of communicating to newcomers how best to contribute. It would have avoided a lot of the frustrations articulated by Wilto:

Unfortunately, we were laboring under the impression that Community Groups shared a deeper inherent connection with the standards bodies than it actually does.

But in any case, as Doctor Bruce writes, at least now there’s a proposed solution for responsive images in HTML: The Living Standard:

I don’t really care which syntax makes the spec, as long as it addresses the majority of use cases and it is usable by authors. I’m just glad we’re discussing the adaptive image problem at all.

So let’s take a look at the technical details.

src code

The Responsive Images Community Group came up with a proposal based on the idea of minting a new element (called, say, picture) that mimics the behaviour of video:

<picture alt="image description">
  <source src="/path/to/image.png" media="(min-width: 600px)">
  <source src="/path/to/otherimage.png" media="(min-width: 800px)">
  <img src="/path/to/image.png" alt="image description">
</picture>

One of the reasons why a new element was chosen rather than extending the existing img element was due to a misunderstanding. The WHATWG had explained that the parsing of img couldn’t be easily altered. That means that img must remain a self-closing element—any solution that requires a closing /img tag wouldn’t work. Alas, that was taken to mean that extending the img element in any way was off the cards.

The picture proposal has a number of things going for it. Its syntax is easily understandable for authors: if you know media queries, then you know how to use picture. It also has a good fallback for older browsers: a regular img element. This fallback mechanism (and the idea of multiple source elements with media queries) is exactly how the video element is specced.

Unfortunately using media queries on the sources of videos has proven to be very tricky for implementors, so they don’t want to see that pattern repeated.

Another issue with multiple source elements is that parsers must wait until the closing /picture tag before they can even begin to evaluate which image to show. That’s not good for performance.

So the alternate solution, based on Ted’s proposal, extends the img element using a new srcset attribute that takes a comma-separated list of values:

<img alt="image description"
src="/path/to/fallbackimage.png"
srcset="/path/to/image.png 800w, /path/to/otherimage.png 600w">

Not nearly as pretty, I think you’ll agree. But it is actually nice and compact for the “retina display” use-case:

<img alt="image description" src="/path/to/image.png" srcset="/path/to/otherimage.png 2x">

Just to be clear, that does not mean that otherimage.png is twice the size of image.png (though it could be). What you’re actually declaring is “Use image.png unless the device supports double-pixel density, in which case, use otherimage.png.”

Likewise, when I declare:

srcset="/path/to/image.png 600w 400h"

…it does not mean that image.png is 600 pixels wide by 400 pixels tall. Instead, it means that an action should be taken if the viewport matches those dimensions.

It took me a while to wrap my head around that distinction: I’m used to attributes describing the element they’re attached to, not the viewport.

Now for the really tricky bit: what do those numbers—600w and 400h—mean? Currently the spec is giving conflicting information.

Each image that’s listed in the srcset comma-separated list can have up to three values associated with it: w, h, and x. The x is pretty clear: that’s the pixel density of the device. The w and h values refer to the width and height of the viewport …but it’s not clear if they mean min-width/height or max-width/height.

If I’m taking a “Mobile First” approach to development, then srcset will meet my needs if w and h refer to min-width and min-height.

In this example, I’ll just use w to keep things simple:

<img src="small.png" srcset="medium.png 600w, large.png 800w">

(Expected behaviour: use small.png unless the viewport is wider than 600 pixels, in which case use medium.png unless the viewport is wider than 800 pixels, in which case use large.png).

If, on the other hand, w and h refer to max-width and max-height, I have to take a “Desktop First” approach:

<img src="large.png" srcset="medium.png 800w, small.png 600w">

(Expected behaviour: use large.png unless the viewport is narrower than 800 pixels, in which case use medium.png unless the viewport is narrower than 600 pixels, in which case use small.png).

One of the advantages of media queries is that, because they support both min- and max- width, they can be used in either use-case: “Mobile First” or “Desktop First”.
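
For comparison, here’s roughly how that “Mobile First” image selection would read using the picture proposal quoted earlier. I’m assuming that, as with video, the first source whose media query matches gets used, which is why the widest breakpoint is listed first:

<picture alt="image description">
  <source src="large.png" media="(min-width: 800px)">
  <source src="medium.png" media="(min-width: 600px)">
  <img src="small.png" alt="image description">
</picture>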

Because the srcset syntax will support either min- or max- width (but not both), it will therefore favour one case at the expense of the other.

Both use-cases are valid. Personally, I happen to use the “Mobile First” approach, but that doesn’t mean that other developers shouldn’t be able to take a “Desktop First” approach if they want. By the same logic, I don’t much like the idea of srcset forcing me to take a “Desktop First” approach.

My only alternative, if I want to take a “Mobile First” approach, is to duplicate image paths and declare ludicrous breakpoints:

<img src="small.png" srcset="small.png 600w, medium.png 800w, large.png 99999w">

I hope that this part of the spec offers a way out:

for the purposes of this requirement, omitted width descriptors and height descriptors are considered to have the value “Infinity”

I think that means I should be able to write this:

<img src="small.png" srcset="small.png 600w, medium.png 800w, large.png">

It’s all quite confusing and srcset doesn’t have anything approaching the extensibility of media queries, but I hope we can get it to work somehow.