Vertical limit

When I was first styling Resilient Web Design, I made heavy use of vh units. The vertical spacing between elements—headings, paragraphs, images—was all proportional to the overall viewport height. It looked great!

Then I tested it on real devices.

Here’s the problem: when a page loads up in a mobile browser—like, say, Chrome on an Android device—the URL bar is at the top of the screen. The height of that piece of the browser interface isn’t taken into account for the viewport height. That makes sense: the viewport height is the amount of screen real estate available for the content. The content doesn’t extend into the URL bar, therefore the height of the URL bar shouldn’t be part of the viewport height.

But then if you start scrolling down, the URL bar scrolls away off the top of the screen. So now it’s behaving as though it is part of the content rather than part of the browser interface. At this point, the value of the viewport height changes: now it’s the previous value plus the height of the URL bar that was previously there but which has now disappeared.

I totally understand why the URL bar is squirrelled away once the user starts scrolling—it frees up some valuable vertical space. But because that necessarily means recalculating the viewport height, it effectively makes the vh unit in CSS very, very limited in scope.

In my initial implementation of Resilient Web Design, the one where I was styling almost everything with vh, the site was unusable. Every time you started scrolling, things would jump around. I had to go back to the drawing board and remove almost all instances of vh from the styles.

I’ve left it in for one use case, which I think is the most common use of vh: making an element take up exactly the height of the viewport. The front page of the web book uses min-height: 100vh for the title.
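
That style is as simple as it sounds. Something like this (the selector here is illustrative, not the one used on the site):

.front-page-title {
    /* Fill the full height of the viewport, however tall that happens to be */
    min-height: 100vh;
}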


But as soon as you scroll down from there, that element changes height. The content below it suddenly moves.

Let’s say the overall height of the browser window is 600 pixels, of which 50 pixels are taken up by the URL bar. When the page loads, 100vh is 550 pixels. But as soon as you scroll down and the URL bar floats away, the value of 100vh becomes 600 pixels.

(This also causes problems if you’re using vertical media queries. If you choose the wrong vertical breakpoint, then the media query won’t kick in when the page loads but will kick in once the user starts scrolling …or vice-versa.)

There’s a mixed message here. On the one hand, the browser is declaring that the URL bar is part of its interface; that the space is off-limits for content. But then, once scrolling starts, that is invalidated. Now the URL bar is behaving as though it is part of the content scrolling off the top of the viewport.

The result of this messiness is that the vh unit is practically useless for real-world situations with real-world devices. It works great for desktop browsers if you’re grabbing the browser window and resizing, but that’s not exactly a common scenario for anyone other than web developers.

I’m sure there’s a way of solving it with JavaScript but that feels like using an atomic bomb to crack a walnut—the whole point of having this in CSS is that we don’t need to use JavaScript for something related to styling.

It’s such a shame. A piece of CSS that’s great in theory, and is really well supported, just falls apart where it matters most.

Update: There’s a two-year-old bug report on this for Chrome, and it looks like it might actually get fixed in February.

Fractal ways

24 Ways is back! That’s how we web nerds know that the Christmas season is here. It kicked off this year with a most excellent bit of hardware hacking from Seb: Internet of Stranger Things.

The site is looking lovely as always. There’s also a component library to accompany it: Bits, the front-end component library for 24 ways. Nice work, courtesy of Paul. (I particularly like the comment component example).

The component library is built with Fractal, the magnificent tool that Mark has open-sourced. We’ve been using it at Clearleft for a while now, but we haven’t had a chance to make any of the component libraries public so it’s really great to be able to point to the 24 Ways example. The code is all on GitHub too.

There’s a really good buzz around Fractal right now. Lots of people in the design systems Slack channel are talking about it. There’s also a dedicated Fractal Slack channel for people getting into the nitty-gritty of using the tool.

If you’re currently wrestling with the challenges of putting a front-end component library together, be sure to give Fractal a whirl.

The imitation game

Jason shared some thoughts on designing progressive web apps. One of the things he’s pondering is how much you should try to make your web-based offering look and feel like a native app.

This was prompted by an article by Owen Campbell-Moore over on Ev’s blog called Designing Great UIs for Progressive Web Apps. He begins with this advice:

Start by forgetting everything you know about conventional web design, and instead imagine you’re actually designing a native app.

This makes me squirm. I mean, I’m all for borrowing good ideas from other media—native apps, TV, print—but I don’t think that inspiration should mean imitation. For me, that always results in an interface that sits in a kind of uncanny valley of being almost—but not quite—like the thing it’s imitating.

With that out of the way, most of the recommendations in Owen’s article are sensible ideas about animation, input, and feedback. But then there’s recommendation number eight: Provide an easy way to share content:

PWAs are often shown in a context where the current URL isn’t easily accessible, so it is important to ensure the user can easily share what they’re currently looking at. Implement a share button that allows users to copy the URL to the clipboard, or share it with popular social networks.

See, when a developer has to implement a feature that the browser should be providing, that seems like a bad code smell to me. This is a problem that Opera is solving (and Google says it is solving, while simultaneously penalising developers who expose the URL to end users).

Anyway, I think my squeamishness about all the advice to imitate native apps is because it feels like a cargo cult. There seems to be an inherent assumption that native is intrinsically “better” than the web, and that the only way that the web can “win” is to match native apps note for note. But that misses out on all the things that only the web can do—instant distribution, low-friction sharing, and the ability to link to any other resource on the web (and be linked to in turn). Turning our beautifully-networked nodes into standalone silos just because that’s the way that native apps have to work feels like the cure that kills the patient.

If anything, my advice for building a progressive web app would be the exact opposite of Owen’s: don’t forget everything you’ve learned about web design. In my opinion, the term “progressive web app” can be read in order of priority:

  1. Progressive—build in a layered way so that anyone can access your content, regardless of what device or browser they’re using, rewarding the more capable browsers with more features.
  2. Web—you’re building for the web. Don’t lose sight of that. URLs matter. Accessibility matters. Performance matters.
  3. App—sure, borrow what works from native apps if it makes sense for your situation.

Jason asks questions about how your progressive web app will behave when it’s added to the home screen. How much do you match the platform? How do you manage going chromeless? And the big one: what do users expect?

Will people expect an experience that maps to native conventions? Or will they be more accepting of deviation because they came to the app via the web and have already seen it before installing it?

These are good questions and I share Jason’s hunch:

My gut says that we can build great experiences without having to make it feel exactly like an iOS or Android app because people will have already experienced the Progressive Web App multiple times in the browser before they are asked to install it.

In all the messaging from Google about progressive web apps, there’s a real feeling that the ability to install to—and launch from—the home screen is a real game changer. I’m not so sure that we should be betting the farm on that feature (the offline possibilities opened up by service workers feel like more of a game-changer to me).

People have been gleefully passing around the statistic that the average number of native apps installed per month is zero. So how exactly will we measure the success of progressive web apps against native apps …when the average number of progressive web apps installed per month is zero?

I like Android’s add-to-home-screen algorithm (although it needs tweaking). It’s a really nice carrot to reward the best websites with. But let’s not get carried away. I think that most people are not going to click that “add to home screen” prompt. Let’s face it, we’ve trained people to ignore prompts like that. When someone is trying to find some information or complete a task, a prompt that pops up saying “sign up to our newsletter” or “download our native app” or “add to home screen” is a distraction to be dismissed. The fact that only the third example is initiated by the operating system, rather than the website, is irrelevant to the person using the website.

Getting the “add to home screen” prompt for https://huffduffer.com/ on Android Chrome.

My hunch is that the majority of people will still interact with your progressive web app via a regular web browser view. If, then, only a minority of people are going to experience your site launched from the home screen in a native-like way, I don’t think it makes sense to prioritise that use case.

The great thing about progressive web apps is that they are first and foremost websites. Literally everyone who interacts with your progressive web app is first going to do so the old-fashioned way, by following a link or typing in a URL. They may later add it to their home screen, but that’s just a bonus. I think it’s important to build progressive web apps accordingly—don’t pretend that it’s just like building a native app just because some people will be visiting via the home screen.

I’m worried that developers are going to think that progressive web apps are something that need to be built from scratch; that you have to start with a blank slate and build something new in a completely new way. Now, there are some good examples of this kind of one-off progressive web app—The Guardian’s RioRun is nicely done. But I don’t think that the majority of progressive web apps should fall into that category. There’s nothing to stop you taking an existing website and transforming it step-by-step into a progressive web app:

  1. Switch over to HTTPS if you aren’t already.
  2. Use a service worker, even if it’s just to provide a custom offline page and cache some static assets (there’s a sketch after this list).
  3. Make a manifest file to point to an icon and specify some colours.
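
By way of illustration, here’s a minimal sketch of step two: a service worker that caches a handful of static assets at install time and falls back to a custom offline page when the network lets you down. The file names are placeholders.

// sw.js
var cacheName = 'static-v1';
var staticAssets = [
    '/offline.html',
    '/styles.css'
];

// Cache the static assets when the service worker is installed
addEventListener('install', function (event) {
    event.waitUntil(
        caches.open(cacheName).then(function (cache) {
            return cache.addAll(staticAssets);
        })
    );
});

// Try the network first; fall back to the cache; fall back to the
// offline page
addEventListener('fetch', function (event) {
    event.respondWith(
        fetch(event.request).catch(function () {
            return caches.match(event.request).then(function (response) {
                return response || caches.match('/offline.html');
            });
        })
    );
});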

See? Not exactly a paradigm shift in how you approach building for the web …but those deceptively straightforward steps will really turbo-boost your site.

I’m really excited about progressive web apps …but mostly for the “progressive” and “web” parts. Maybe I’ll start calling them progressive web sites. Or progressive web thangs.

Sticky headers

I made a little tweak to The Session today. The navigation bar across the top is “sticky” now—it doesn’t scroll with the rest of the content.

I made sure that the stickiness only kicks in if the screen is both wide and tall enough to warrant it. Vertical media queries are your friend!
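
The mechanism itself is simple enough. Something like this (the class name and breakpoint values are illustrative):

/* Only stick the navigation if the screen is wide and tall enough */
@media all and (min-width: 40em) and (min-height: 35em) {
    .site-navigation {
        position: fixed;
        top: 0;
        left: 0;
        width: 100%;
    }
}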

But it’s not enough to just put some position: fixed CSS inside a media query. There are some knock-on effects that I needed to mitigate.

I use the space bar to paginate through long pages. It drives me nuts when sites with sticky headers don’t accommodate this. I made use of Tim Murtaugh’s sticky pagination fixer. It makes sure that page-jumping with the keyboard (using the space bar or page down) still works. I remember when I linked to this script two years ago, thinking “I bet this will come in handy one day.” Past me was right!

The other “gotcha!” with having a sticky header is making sure that in-page anchors still work. Nicolas Gallagher covers the options for this in a post called Jump links and viewport positioning. Here’s the CSS I ended up using:

:target:before {
    /* Generate an invisible block above the targeted element... */
    content: '';
    display: block;
    height: 3em;
    /* ...then pull the element back up by the same amount, so that
       jumped-to content isn't obscured by the sticky header */
    margin: -3em 0 0;
}

I also needed to check any of my existing JavaScript to see if I was using scrollTo anywhere, and adjust the calculations to account for the newly-sticky header.
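
That kind of adjustment looks something like this (a hypothetical example; the class name and function are my own inventions):

var header = document.querySelector('.site-navigation');
var headerHeight = header ? header.offsetHeight : 0;

function scrollToElement(element) {
    // Stop short of the element's position by the height of the
    // sticky header, so the content isn't hidden underneath it
    window.scrollTo(0, element.offsetTop - headerHeight);
}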

Anyway, just a few things to consider if you’re going to make a navigational element “sticky”:

  1. Use min-height in your media query,
  2. Take care of keyboard-initiated page scrolling,
  3. Adjust the positioning of in-page links.

A little progress

I’ve got a fairly simple posting interface for my notes. A small textarea, an optional file upload, some checkboxes for syndicating to Twitter and Flickr, and a submit button.

Notes posting interface

It works fine although sometimes the experience of uploading a file isn’t great, especially if I’m on a slow connection out and about. I’ve been meaning to add some kind of Ajax-y progress type thingy for the file upload, but never quite got around to it. To be honest, I thought it would be a pain.

But then, in his excellent State Of The Gap hit parade of web technologies, Remy included a simple file upload demo. Turns out that all the goodies that have been added to XMLHttpRequest have made this kind of thing pretty easy (and I’m guessing it’ll be easier still once we have fetch).

I’ve made a little script that adds a progress bar to any forms that are POSTing data.
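
The gist of it goes something like this (a simplified sketch rather than the script itself):

// Intercept any form that POSTs, send its data with XMLHttpRequest,
// and report upload progress with a generated progress element
var forms = document.querySelectorAll('form[method="post"]');

Array.prototype.forEach.call(forms, function (form) {
    form.addEventListener('submit', function (event) {
        event.preventDefault();
        var progress = document.createElement('progress');
        progress.max = 100;
        form.appendChild(progress);

        var request = new XMLHttpRequest();
        request.open('POST', form.action);
        request.upload.onprogress = function (e) {
            // Update the progress bar as the data is uploaded
            if (e.lengthComputable) {
                progress.value = (e.loaded / e.total) * 100;
            }
        };
        request.onload = function () {
            // How you handle the response is up to you; following
            // the form's action is the simplest option
            window.location = form.action;
        };
        request.send(new FormData(form));
    });
});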

Feel free to use it, adapt it, and improve it. It isn’t using any ES6iness so there are some obvious candidates for improvement there.

It’s working a treat on my little posting interface. Now I can stare at a slowly-growing progress bar when I’m out and about on a slow connection.

Conversational interfaces

Psst… Jeremy! Right now you’re getting notified every time something is posted to Slack. That’s great at first, but now that activity is increasing you’ll probably prefer dialing that down.

Slackbot, 2015

What’s happening?

Twitter, 2009

Why does everyone always look at me? I know I’m a chalkboard and that’s my job, I just wish people would ask before staring at me. Sometimes I don’t have anything to say.

Existentialist chalkboard, 2007

I’m Little MOO - the bit of software that will be managing your order with us. It will shortly be sent to Big MOO, our print machine who will print it for you in the next few days. I’ll let you know when it’s done and on it’s way to you.

Little MOO, 2006

It looks like you’re writing a letter.

Clippy, 1997

Your quest is to find the Warlock’s treasure, hidden deep within a dungeon populated with a multitude of terrifying monsters. You will need courage, determination and a fair amount of luck if you are to survive all the traps and battles, and reach your goal — the innermost chambers of the Warlock’s domain.

The Warlock Of Firetop Mountain, 1982

Welcome to Adventure!! Would you like instructions?

Colossal Cave, 1976

I am a lead pencil—the ordinary wooden pencil familiar to all boys and girls and adults who can read and write.

I, Pencil, 1958

Ælfred ordered me to be made

Ashmolean Museum, Oxford

The Ælfred Jewel, ~880

Technical note

I have marked up the protagonist of each conversation using the cite element. There is a long-running dispute over the use of this element. In HTML 4.01 it was perfectly fine to use cite to mark up a person being quoted. In the HTML Living Standard, usage has been narrowed:

The cite element represents the title of a work (e.g. a book, a paper, an essay, a poem, a score, a song, a script, a film, a TV show, a game, a sculpture, a painting, a theatre production, a play, an opera, a musical, an exhibition, a legal case report, a computer program, etc). This can be a work that is being quoted or referenced in detail (i.e. a citation), or it can just be a work that is mentioned in passing.

A person’s name is not the title of a work — even if people call that person a piece of work — and the element must therefore not be used to mark up people’s names.

I disagree.

In the examples above, it’s pretty clear that I, Pencil and Warlock Of Firetop Mountain are valid use cases for the cite element according to the HTML5 definition; they are titles of works. But what about Clippy or Little Moo or Slackbot? They’re not people …but they’re not exactly titles of works either.

If I were to mark up a dialogue between Eliza and a human being, should I only mark up Eliza’s remarks with cite? In text transcripts of conversations with Alexa, Siri, or Cortana, should only their side of the conversation get attributed as a source? Or should they also be written without the cite element because it must not be used to mark up people’s names …even though they are not people, by any conventional definition.

It’s downright botist.

Accessible progressive disclosure revisited

I wrote a little while back about making an accessible progressive disclosure pattern. It’s very basic—just a few ARIA properties and a bit of JavaScript sprinkled onto some basic HTML. The HTML contains a button element that toggles the aria-hidden property on a chunk of markup.

Earlier this week I had a chance to hang out with accessibility experts Derek Featherstone and Devon Persing so I took the opportunity to pepper them with questions about this pattern. My main question was “Should I automatically focus the toggled content?”

Derek’s response was very perceptive. He wanted to know why I was using a button. Good question. When you think about it, what I’m doing is pointing from one element to another. On the web, we point with links.

There are no hard’n’fast rules about this kind of thing, but as Derek put it, it helps to think about whether the action involves controlling something (use a button) or taking the user somewhere (use a link). At first glance, the progressive disclosure pattern seems to be about controlling something—toggling the appearance of another element. But if I’m questioning whether to automatically focus that element, then really I’m asking whether I want to take the user to that place in the document—in other words, linking to it.

I decided to update the markup. Here’s what I had before:

<button aria-controls="content">Reveal</button>
<div id="content"></div>

Here’s what I have now:

<a href="#content" aria-controls="content">Reveal</a>
<div id="content"></div>

The logic in the JavaScript remains exactly the same (there’s a sketch after the list):

  1. Find any elements that have an aria-controls attribute (these were buttons, now they’re links).
  2. Grab the value of that aria-controls attribute (an ID).
  3. Hide the element with that ID by applying aria-hidden="true" and make that element focusable by adding tabindex="-1".
  4. Set aria-expanded="false" on the associated link (this attribute can be a bit confusing—it doesn’t mean that this element is not expanded; it means the element it controls is not expanded).
  5. Listen for click events on those links.
  6. Toggle the aria-hidden and aria-expanded when there’s a click event.
  7. When aria-hidden is set to false on an element (thereby revealing it), focus that element.
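
Here’s a minimal sketch of that logic (this assumes a CSS rule that visually hides anything with aria-hidden="true"; the CodePen linked below has the real thing):

var toggles = document.querySelectorAll('a[aria-controls]');

Array.prototype.forEach.call(toggles, function (toggle) {
    var content = document.getElementById(toggle.getAttribute('aria-controls'));

    // Hide the content to begin with, but make it focusable
    content.setAttribute('aria-hidden', 'true');
    content.setAttribute('tabindex', '-1');
    toggle.setAttribute('aria-expanded', 'false');

    toggle.addEventListener('click', function (event) {
        event.preventDefault();
        var hidden = content.getAttribute('aria-hidden') === 'true';
        content.setAttribute('aria-hidden', String(!hidden));
        toggle.setAttribute('aria-expanded', String(hidden));
        if (hidden) {
            // The content has just been revealed: move focus to it
            content.focus();
        }
    });
});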

You can see it in action on CodePen.

See the Pen Accessible toggle (link) by Jeremy Keith (@adactio) on CodePen.

Accessible progressive disclosure

Over on The Session I have a few instances of a progressive disclosure pattern. It’s just your basic show/hide toggle: click on a button; see some more content. For example, there’s a “download” button for every tune that displays options to download the tune in different formats (ABC and midi).

To begin with, I was using the :checked pseudo-class pattern that Charlotte has documented so well. I really like that pattern. It feels nice and straightforward. But then I got some feedback from someone using the site:

the link for midi files is no longer coming up on the tune pages. I am blind so I rely on the midi’s when finding tunes for my students.

I wrote back saying the link to download midi files was revealed by the “download” option. The response:

Excellent. I have it now, I was just looking for the midi button which wasn’t there. the actual download button doesn’t read as a button under each version of the tune but now I know it’s there I know what I am doing. I am using the JAWS screen reader.

This was just one person …one person who took the time to write to me. What about other screen reader users?

I dabbled around with adding role="button" to the checkbox or the label, but that felt really icky (contradicting the inherent role of those elements) and it didn’t seem to make much difference anyway.

I decided to refactor the progressive disclosure to use JavaScript instead of just CSS. I wanted to make sure that accessibility was built into the functionality, rather than just bolted on. That’s why the code I’ve written doesn’t rely on the buttons having a particular class value; instead the buttons must have an aria-controls attribute that associates the button with the element it toggles (in much the same way that a for attribute associates a label with a form field).

Here’s the logic:

  1. Find any elements that have an aria-controls attribute (these should be buttons).
  2. Grab the value of that aria-controls attribute (an ID).
  3. Hide the element with that ID by applying aria-hidden="true" and make that element focusable by adding tabindex="-1".
  4. Set aria-expanded="false" on the associated button (this attribute can be a bit confusing—it doesn’t mean that this element is not expanded; it means the element it controls is not expanded).
  5. Listen for click events on those buttons.
  6. Toggle the aria-hidden and aria-expanded when there’s a click event.
  7. When aria-hidden is set to false on an element (thereby revealing it), focus that element.

You can see it in action on CodePen.

I’m still playing around with this. I think the :focus styles are probably far too subtle right now—see this excellent presentation from Laura Palmaro for more on that. I’m also not sure if the revealed content should automatically take focus. I’ll see if I can get some feedback from people on The Session using screen readers—there’s quite a few of them.

Feel free to use my code but you might want to check out Jason’s code to do the same thing—his is bound to be nicer to work with.

Update: In response to this discussion, I’ve decided not to automatically focus the expanded content.

Words of welcome

For a while now, The Session has had some little on-boarding touches to make sure that new members are eased into the culture of this traditional Irish music community.

First off, new members are encouraged to add a little bit about themselves so that there’s some context when they start making contributions.

Welcome! You are now a member of The Session. Now, how about sharing a bit more about yourself: where you're from, what instrument(s) you play, etc.

Secondly, new members can’t kick off a brand new discussion straight away.

Woah there! I appreciate your eagerness to post your first discussion, but seeing as you just joined The Session, maybe it would be better if you wait a little bit first. Take a look around at the existing discussions, have a read of the house rules and get a feel for how things work around here.

Likewise, they can’t post a comment straight away. They need to wait an hour between signing up and posting their first comment. Instead of seeing a comment form, they see a countdown.

Welcome to The Session, Testy McTest! You'll be able to add your first comment in forty-seven minutes.

Finally, when they do make their first submission—whether it’s a discussion, an event, a session, or a tune—the interface displays a few extra messages of encouragement and care.

Add a tune, Step 1 of 4: Tune Details. As this is your first tune submission, please take extra care. First, provide some basic details about the tune you want to add.

But I realised that all of these custom messages were very one-sided. They were always displayed to the new member. It’s equally important that existing members treat any newcomers with respect.

Now on some discussions, an extra message is displayed to existing members right before the comment form. The logic is straightforward:

  1. If this is a discussion added by a new member,
  2. who hasn’t yet added any comments anywhere,
  3. and this discussion has no responses so far,
  4. and anyone other than that member is viewing the page,
  5. then display a message asking for help making this new member feel welcome.

This is the first ever post by FourCourseChaos. Please help in making them feel welcome here at The Session.

It’s a small addition, but it makes a difference.

No intricate JavaScript; no smooth animations; just some words on a screen encouraging a human connection.

Hamburger, hamburger, hamburger

Andy’s been playing Devil’s Advocate again, defending the much-maligned hamburger button. Weirdly though, I think I’ve seen more blog posts, tweets, and presentations defending this supposed underdog than I’ve seen knocking it.

Take this presentation from Smashing Conference. It begins with a stirring call to arms. Designers of the web—cast off your old ways, dismiss your clichés, try new things, and discard lazy solutions! “Yes!”, I thought to myself, “this is a fantastic message.” But then the second half of the talk switches into a defence of the laziest, most clichéd, least thought-through old tropes of interface design: carousels, parallax scrolling and, inevitably, the hamburger icon.

But let’s not get into a binary argument of “good” vs. “bad” when it comes to using the hamburger icon. I think the question is more subtle than that. There are three issues that need to be addressed if we’re going to evaluate the effectiveness of using the hamburger icon:

  1. representation,
  2. usage, and
  3. clarity.


Representation

An icon is a gateway to either some content or a specific action. The icon should provide a clear representation of the content or action that it leads to. Sometimes “clear” doesn’t have to literally mean that it’s representative: we use icons all the time that don’t actually represent the associated content or action (a 3.5 inch diskette for “save”, a house for the home page of a website, etc.). Cultural factors play a large part here. Unless the icon is a very literal pictorial representation, it’s unlikely that any icon can be considered truly universal.

If a hamburger icon is used as the gateway to a list of items, then it’s fairly representative. It’s a bit more abstract than an actual list of menu items stacked one on top of the other, but if you squint just right, you can see how “three stacked horizontal lines” could represent “a number of stacked menu items.”

If, on the other hand, a hamburger icon is used as the gateway to, say, a grid of options, then it isn’t representative at all. A miniaturised grid—looking like a window—would be a more representative option.

So in trying to answer the question “Does the hamburger icon succeed at being representative?”, the answer—as ever—is “it depends.” If it’s used as a scaled-down version of the thing its representing, it works. If it’s used as a catch-all icon to represent “a bunch of stuff” (as is all too common these days), then it works less well.

Which brings us to…

Usage

Much of the criticism of the hamburger icon isn’t actually about the icon itself, it’s about how it’s used. Too many designers are using it as an opportunity to de-clutter their interface by putting everything behind the icon. This succeeds in de-cluttering the interface in the same way that a child putting all their messy crap in the cupboard succeeds in cleaning their room.

It’s a tricky situation though. On small screens especially, there just isn’t room to display all possible actions. But the solution is not to display none instead. The solution is to prioritise. Which actions need to be visible? Which actions can afford to be squirrelled away behind an icon? A designer is supposed to answer those questions (using research, testing, good taste, experience, or whatever other tools are at their disposal).

All too often, the hamburger icon is used as an excuse to shirk that work. It’s treated as a “get out of jail free” card for designing small-screen interfaces.

To be clear: this usage—or misusage—has nothing to do with the actual icon itself. The fact that the icon is three stacked lines is fairly irrelevant on this point. The reason why the three stacked lines are so often used is that there’s a belief that this icon will be commonly understood.

That brings us to the last and most important point:

Clarity

By far the most important factor in whether an icon—any icon—will be understood is whether or not it is labelled. A hamburger icon labelled with a word like “menu” or “more” or “options” is going to be far more effective than an unlabelled icon.
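
In markup terms, that can be as simple as putting a visible word next to the icon (a hypothetical example; the file name is a placeholder):

<!-- The visible word carries the meaning; the icon is decorative,
     so its alt text is empty -->
<button aria-expanded="false" aria-controls="navigation">
    <img src="hamburger.svg" alt="" /> Menu
</button>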

Don’t believe me? Good! Do some testing.

In my experience, 80-90% of the benefit of usability testing is in the area of labelling. And one of the lowest hanging fruit is the realisation that “Oh yeah, we should probably label that icon that we assumed would be universally understood.”

Andy mentions the “play” and “pause” symbols as an example of icons that are so well understood that they can stand by themselves. That’s not necessarily true.

I think there are two good rules of thumb when it comes to using icons:

  1. If in doubt, label it.
  2. If not in doubt, you probably should be—test your assumptions.


Now that we’ve established the three criteria for evaluating an icon’s effectiveness, let’s see how the hamburger icon stacks up (if you’ll pardon the pun):

  1. Representation: It depends. Is it representing a stacked list of menu items? If so, good. If not, reconsider.
  2. Usage: it depends. Is it being used as an excuse to throw literally all your navigation behind it? If so, reconsider. Prioritise. Decide what needs to be visible, and what can be tucked away.
  3. Clarity: it depends. Is the icon labelled? If so, good. If not, less good.

So there you go. The answer to the question “Is the hamburger icon good or bad?” is a resounding and clear “It depends.”

100 words 066

Today, as part of a crack Clearleft team, I travelled to Leamington Spa. That’s Royal Leamington Spa to you.

This seems like a perfectly pleasant town. Fortunately for us, our visit coincides with a pub quiz down at the local hipster bar—the one serving Mexican food with a cajun twist. Naturally we joined in the quizzing fun.

We thought we were being sensible by jokering the “science and nature” round, but it turned out we should’ve jokered “puppets and dummies” or “musicians in the movies”—a clean sweep! Who could’ve foretold that Andy Budd’s favourite film, Freejack, would feature?

A question of timing

I’ve been updating my collection of design principles lately, adding in some more examples from Android and Windows. Coincidentally, Vasilis unveiled a neat little page that grabs one list of principles at random—just keep refreshing to see more.

I also added this list of seven principles of rich web applications to the collection, although they feel a bit more like engineering principles than design principles per se. That said, they’re really, really good. Every single one is rooted in performance and the user’s experience, not developer convenience.

Don’t get me wrong: developer convenience is very, very important. Nobody wants to feel like they’re doing unnecessary work. But I feel very strongly that the needs of the end user should trump the needs of the developer in almost all instances (you may feel differently and that’s absolutely fine; we’ll agree to differ).

That push and pull between developer convenience and user experience is, I think, most evident in the first principle: server-rendered pages are not optional. Now before you jump to conclusions, the author is not saying that you should never do client-side rendering, but instead points out the very important performance benefits of having the server render the initial page. After that—if the user’s browser cuts the mustard—you can use client-side rendering exclusively.

The issue with that hybrid approach—as I’ve discussed before—is that it’s hard. Isomorphic JavaScript (terrible name) can theoretically help here, but I haven’t seen too many examples of it in action. I suspect that’s because this approach doesn’t yet offer enough developer convenience.

Anyway, I found myself nodding along enthusiastically with that first of seven design principles. Then I got to the second one: act immediately on user input. That sounds eminently sensible, and it’s backed up with sound reasoning. But it finishes with:

Techniques like PJAX or TurboLinks unfortunately largely miss out on the opportunities described in this section.

Ah. See, I’m a big fan of PJAX. It’s essentially the same thing as the Hijax technique I talked about many years ago in Bulletproof Ajax, but with the new addition of HTML5’s History API. It’s a quick’n’dirty way of giving the illusion of a fat client: all the work is actually being done in the server, which sends back chunks of HTML that update the interface. But it’s true that, because of that round-trip to the server, there’s a bit of a delay and so you often end up briefly displaying a loading indicator.

I contend that spinners or “loading indicators” should become a rarity

I agree …but I also like using PJAX/Hijax. Now how do I reconcile what’s best for the user experience with what’s best for my own developer convenience?

I’ve come up with a compromise, and you can see it in action on The Session. There are multiple examples of PJAX in action on that site, like pretty much any page that returns paginated results: new tune settings, the latest events, and so on. The steps for initiating an Ajax request used to be:

  1. Listen for any clicks on the page,
  2. If a “previous” or “next” button is clicked, then:
  3. Display a loading indicator,
  4. Request the new data from the server, and
  5. Update the page with the new data.

In one sense, I am acting immediately on user input, because I always display the loading indicator straight away. But because the loading indicator always appears, no matter how fast or slow the server responds, it sometimes only appears very briefly—just for a flash. In that situation, I wonder if it’s serving any purpose. It might even be doing the opposite of its intended purpose—it draws attention to the fact that there’s a round-trip to the server.

“What if”, I asked myself, “I only showed the loading indicator if the server is taking too long to send a response back?”

The updated flow now looks like this (there’s a code sketch after the list):

  1. Listen for any clicks on the page,
  2. If a “previous” or “next” button is clicked, then:
  3. Start a timer, and
  4. Request the new data from the server.
  5. If the timer reaches an upper limit, show a loading indicator.
  6. When the server sends a response, cancel the timer and
  7. Update the page with the new data.
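
In code, the pattern looks something like this (a simplified sketch, not the actual implementation on The Session; the timer value is discussed below):

function loadResults(url, container) {
    var indicator = document.createElement('progress');

    // Only show the loading indicator if the response is slow in coming
    var timer = setTimeout(function () {
        container.appendChild(indicator);
    }, 250);

    var request = new XMLHttpRequest();
    request.open('GET', url);
    request.onload = function () {
        // The data has arrived: cancel the timer, remove the
        // indicator if it ever appeared, and update the page
        clearTimeout(timer);
        if (indicator.parentNode) {
            indicator.parentNode.removeChild(indicator);
        }
        container.innerHTML = request.responseText;
    };
    request.send();
}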

Even though there are more steps, there’s actually less happening from the user’s perspective. Where previously you would experience this:

  1. I click on a button,
  2. I briefly see a loading indicator,
  3. I see the new data.

Now your experience is:

  1. I click on a button,
  2. I see the new data.

…unless the server or the network is taking too long, in which case the loading indicator appears as an interim step.

The question is: how long is too long? How long do I wait before showing the loading indicator?

The Nielsen Norman group offers this bit of research:

0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.

So I should set my timer to 100 milliseconds. In practice, I found that I can set it to as high as 200 to 250 milliseconds and keep it feeling very close to instantaneous. Anything over that, though, and it’s probably best to display a loading indicator: otherwise the interface starts to feel a little sluggish, and slightly uncanny. (“Did that click do any—? Oh, it did.”)

You can test the response time by looking at some of the simpler pagination examples on The Session: new recordings or new discussions, for example. To see examples of when the server takes a bit longer to send a response, you can try paginating through search results. These take longer because, frankly, I’m not very good at optimising some of those search queries.

There you have it: an interface that—under optimal conditions—reacts to user input instantaneously, but falls back to displaying a loading indicator when conditions are less than ideal. The result is something that feels like a client-side web thang, even though the actual complexity is on the server.

Now to see what else I can learn from the rest of those design principles.

The Session trad tune machine

Most pundits call it “the Internet of Things” but there’s another phrase from Andy Huntington that I first heard from Russell Davies: “the Geocities of Things.” I like that.

I’ve never had much exposure to this world of hacking electronics. I remember getting excited about the possibilities at a Brighton BarCamp back in 2008:

I now have my own little arduino kit, a bread board and a lucky bag of LEDs. Alas, know next to nothing about basic electronics so I’m really going to have to brush up on this stuff.

I never did do any brushing up. But that all changed last week.

Seb is doing a new two-day workshop. He doesn’t call it Internet Of Things. He doesn’t call it Geocities Of Things. He calls it Stuff That Talks To The Interwebs, or STTTTI, or ST4I. He needed some guinea pigs to test his workshop material on, so Clearleft volunteered as tribute.

In short, it was great! And this time, I didn’t stop hacking when I got home.

First off, every workshop attendee gets a hand-picked box of goodies to play with and keep: an arduino mega, a wifi shield, sensors, screens, motors, lights, you name it. That’s the hardware side of things. There are also code samples and libraries that Seb has prepared in advance.

Getting ready to workshop with @Seb_ly. Unwrapping some Christmas goodies from Santa @Seb_ly.

Now, remember, I lack even the most basic knowledge of electronics, but after two days of fiddling with this stuff, it started to click.

Blinkenlights. Hello, little fella.

On the first workshop day, we all did the same exercises, connected things up, getting them to talk to the internet, that kind of thing. For the second workshop day, Seb encouraged us to think about what we might each like to build.

I was quite taken with the ability of the piezo buzzer to play rudimentary music. I started to wonder if there was a way to hook it up to The Session and have it play the latest jigs, reels, and hornpipes that have been submitted to the site in ABC notation. A little bit of googling revealed that someone had already taken a stab at writing an ABC parser for arduino. I didn’t end up using that code, but it convinced me that what I was trying to do wasn’t crazy.

So I built a machine that plays Irish traditional music from the internet.

Playing with hardware and software, making things that go beep in the night.

The hardware has a piezo buzzer, an “on” button, an “off” button, a knob for controlling the speed of the tune, and an obligatory LED.

The software has a countdown timer that polls a URL every minute or so. The URL is http://tune.adactio.com/. That in turn uses The Session’s read-only API to grab the latest tune activity and then get the ABC notation for whichever tune is at the top of that list. Then it does some cleaning up—removing some of the more advanced ABC stuff—and outputs a single line of notes to be played. I’m fudging things a bit: the device has the range of a tin whistle, and expects tunes to be in the key of D or G, but seeing as that’s at least 90% of Irish traditional music, it’s good enough.

Whenever there’s a new tune, it plays it. Or you can hit the satisfying “on” button to manually play back the latest tune (and yes, you can hit the equally satisfying “off” button to stop it). Being able to adjust the playback speed with a twiddly knob turns out to be particularly handy if you decide to learn the tune.

I added one more lo-fi modification. I rolled up a piece of paper and placed it over the piezo buzzer to amplify the sound. It works surprisingly well. It’s loud!

Rolling my own speaker cone, quite literally.

I’ll keep tinkering with it. It’s fun. I realise I’m coming to this whole hardware-hacking thing very late, but I get it now: it really does feel similar to that feeling you would get when you first figured out how to make a web page back in the days of Geocities. I’ve built something that’s completely pointless for most people, but has special meaning for me. It’s ugly, and it’s inefficient, but it works. And that’s a great feeling.

(P.S. Seb will be running his workshop again on the 3rd and 4th of February, and there will a limited amount of early-bird tickets available for one hour, between 11am and midday this Thursday. I highly recommend you grab one.)

Code refactoring for America

Here at Clearleft, we’ve been doing some extra work with Code for America following on from our initial deliverables. This makes me happy for a number of reasons:

  1. They’re a great client—really easy-going and fun to work with.
  2. We’ve got Anna back in the office and it’s always nice to have her around.
  3. We get to revisit the styleguide we provided, and test our assumptions.

That last one is important. When we provide a pattern library to a client, we hope that they’ve got everything they need. If we’ve done our job right, then they’ll be able to combine patterns in ways we haven’t foreseen to create entirely new page types.

For the most part, that’s been the case with Code for America. They have a solid set of patterns that are serving them well. But what’s been fascinating is to hear about what it’s like for the people using those patterns…

There’s been a welcome trend in recent years towards extremely robust, maintainable CSS. SMACSS, BEM, OOCSS and other methodologies might differ in their details, but their fundamental approach is pretty similar. The idea is that you apply a very specific class to every element you want to style:

<div class="thingy">
    <ul class="thingy-bit">
        <li class="thingy-bit-item"></li>
        <li class="thingy-bit-item"></li>
    </ul>
    <img class="thingy-wotsit" src="" alt="" />
</div>

That allows you to keep your CSS selectors very short, but very specific:

.thingy {}
.thingy-bit {}
.thingy-bit-item {}
.thingy-wotsit {}

There’s little or no nesting, and you only ever use class selectors. That keeps your CSS nice and clear, and you avoid specificity hell. The catch is that your HTML is necessarily more verbose: you need to explicitly add a class to whatever you want to style.

For most projects—particularly product work (think Twitter, Facebook, etc.)—that’s a completely acceptable trade-off. It’s usually the same developers editing the CSS and the HTML so there’s no problem moving complexity out of CSS and into the markup templates. Even if other people will be entering the actual content into the system, they’ll probably be doing that mediated through a Content Management System, rather than editing HTML directly.

So nine times out of ten, making the HTML more verbose is absolutely the right choice in order to make the CSS more manageable and maintainable. That’s the way we initially built the pattern library for Code for America.

Well, it turns out that the people using the markup patterns aren’t necessarily the same people who would be dealing with the CSS. Also, there isn’t necessarily a CMS involved. Instead, people (volunteers, employees, anyone really) create new pages by copying and pasting the patterns we’ve provided and then editing them.

By optimising on the CSS side of things, we’ve offloaded a lot of complexity onto their shoulders. While it’s fair enough to expect them to understand basic HTML, it’s hardly fair to expect them to learn a whole new vocabulary of thingy and thingy-wotsit class names just to get things to look the way they expect.

Here’s a markup pattern that makes more sense for the people actually dealing with the HTML:

<div class="thingy">
    <ul>
        <li></li>
        <li></li>
    </ul>
    <img src="" alt="" />
</div>

Much clearer. But now the CSS looks like this:

.thingy {}
.thingy ul {}
.thingy li {}
.thingy img {}

Actually it’s probably going to look more complicated than that: more nesting, more element selectors, more “defensive” rules trying to anticipate the kind of markup that might be used in a particular pattern.

It feels really strange for Anna and myself to work with these kinds of patterns. All of our experience screams “Don’t do that! Why would you do that?” …but in this case, it’s the right thing to do for the people building the actual website.

So please don’t interpret this as me saying “Hey, everyone, this is how you should write your CSS.” I’m not saying this is better or worse than adding lots of classes to your HTML. If anything, this illustrates that there is no one right way to do this.

It’s worth remembering why we’re aiming for maintainability in what we write. It’s not for any technical reason. It’s for people. If those people find it better to deal with simplified CSS and more complex HTML, then the complexity should be in the HTML. But if the priority for those people is to have simple HTML, then more complex CSS may be an acceptable price to pay.

In other words, it depends.

Notes from the edge

I went up to London for the Edge Conference on Friday. It’s not your typical conference. Instead of talks, there are panels, but not the crap kind, where nobody says anything of interest: these panels are ruthlessly curated and prepared. There’s lots of audience interaction too, but again, not the crap kind, where one or two people dominate the discussion with their own pet topics: questions are submitted ahead of time, and then you are called upon to ask it at the right moment. It’s like Question Time for the web.

Web Components

The first panel was on that hottest of topics: Web Components. Peter Gasston kicked it off with a superb introduction to the subject. Have a read of his equally-excellent article in Smashing Magazine to get the gist.

Needless to say, this panel covered similar ground to the TAG meetup I attended a little while back, and left me with similar feelings: I’m equal parts excited and nervous; optimistic and worried. If Web Components work out, and we get a kind emergent semantics of UI widgets, it’ll be a huge leap forward for the web. But if we end up with a Tower of Babel, things could get very messy indeed. We’ll probably get both at once. And I think that’ll be (mostly) okay.

I butted into the discussion when the topic of accessibility came up. I was a little worried about what I was hearing, which was mainly, “Oh, ARIA takes care of the accessibility.” I felt like Web Components were passing the buck to ARIA, which would be fine if it weren’t for the fact that ARIA can’t cover all the possible use-cases of Web Components.

I chatted about this with Derek and Nicole during the break, but I’m not sure if I was articulating my thoughts very well, so I’ll have another stab at it here:

Let me set the scene for Web Components…

Historically, HTML has had a limited vocabulary for expressing interface widgets—mostly a bunch of specialised form fields like, say, the select element. The plus side is that there’s a consensus of understanding among the browsers, so you don’t have to explain what a select element does; the browsers already know. The downside is that whenever we want to add a new interface element like input type="range", it takes time to get into browsers and through the standards process. Web Components allow you to conjure up interface elements, and you don’t have to lobby browser makers or standards groups in order to make browsers understand your newly-minted element: you provide all the behavioural and styling instructions in one bundle.
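
In today’s terms, that bundle might look something like this (a sketch using the custom elements API that eventually shipped, which postdates this post; the element name is invented):

// An invented element: markup, behaviour, and scoped styles all
// travel together in one bundle
class FancyDisclosure extends HTMLElement {
    connectedCallback() {
        if (this.shadowRoot) { return; }
        var shadow = this.attachShadow({ mode: 'open' });
        shadow.innerHTML =
            '<style>button { font: inherit; }</style>' +
            '<button aria-expanded="false">Reveal</button>' +
            '<div hidden><slot></slot></div>';
        var button = shadow.querySelector('button');
        var content = shadow.querySelector('div');
        button.addEventListener('click', function () {
            var expanded = button.getAttribute('aria-expanded') === 'true';
            button.setAttribute('aria-expanded', String(!expanded));
            content.hidden = expanded;
        });
    }
}
customElements.define('fancy-disclosure', FancyDisclosure);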

So Web Components make use of HTML, JavaScript, and (scoped) CSS. The possibility space for the HTML is infinite: if you need an element that doesn’t exist, you just invent it. The possibility space for the JavaScript is pretty close to infinite: it’s a Turing-complete language that can be wrangled to do just about anything. The possibility space for CSS isn’t infinite, but it’s pretty darn big: there’s not much you can’t do with it at this point.

What’s missing from that bundle of HTML, JavaScript, and CSS are hooks for assistive technology. Up until now, this is something we’ve mostly left to the browser. We don’t have to include any hooks for assistive technology when we use a select element because the browser knows what it is and can expose that knowledge to the assistive technology. If we’re going to start making up our own interface elements, we now have to take on the responsibility of providing that information to assistive technology.

How do we do that? Well, right now, our only option is to use ARIA …but the possibility space defined by ARIA is much, much smaller than that of HTML, JavaScript, or CSS.

That’s not a criticism of ARIA: that’s the way it was designed. It’s a reactionary technology, designed to plug the gaps where the native semantics of HTML just don’t cut it. The vocabulary of ARIA was created by looking at the kinds of interface elements people are making—tabs, sliders, and so on. That’s fine, but it can’t scale to keep pace with Web Components.

The problem that Web Components solve—the fact that it currently takes too long to get a new interface element into browsers—doesn’t have a corresponding solution when it comes to accessibility hooks. Just adding more and more predefined ARIA roles won’t cut it—we need some kind of extensible accessibility that matches the expressive power of Web Components. We don’t need a bigger vocabulary in ARIA, we need a way to define our own vocabulary—an extensible ARIA, if you will.

Hmmm… I’m still not sure I’m explaining myself very well.

Anyway, I just want to make sure that accessibility doesn’t get left behind (again!) in our rush to create a new solution to our current problems. With Web Components still in their infancy, this feels like the right time to raise these concerns.

That highlights another issue, one that Nicole picked up on. It’s really important that the extensible web community and the accessibility community talk to each other.

Frankly, the accessibility community can be its own worst enemy sometimes. So don’t get me wrong: I’m not bringing up my concerns about the accessibility of Web Components in order to cry “fail!”—I just want to make sure that it’s on the table (and I’m glad that Alex is one of the people driving Web Components—his history with Dojo reassures me that we can push the boundaries of interface widgets on the web without leaving accessibility behind).

Anyway …that’s enough about that. I haven’t mentioned all the other great discussions that took place at Edge Conference.

Developer Tooling

The Web Components panel was followed by a panel on developer tools. This was dominated by representatives from different browsers, each touting their own set of in-browser tools. But the person who I really wanted to rally behind was Kenneth Auchenberg. He quite rightly asks why our developer tools and our text editors are two different apps. And rather than try to put text editors into developer tools, what we really want is to pull developer tools into our text editors …all the developer tools from all the browsers, not just one set of developer tools from one specific browser.

If you haven’t seen Kenneth’s presentation from Full Frontal, I urge you to watch it or listen to it.

I had my hand up to jump into the discussion towards the end, but time ran out so I didn’t get a chance. Paul came over afterwards and asked what I was going to say. Here’s what I told him…

I’m fascinated by the social dynamics around how browsers get made. This is an area where different companies are simultaneously collaborating and competing.

Broadly speaking, the feature set of a web browser can be divided into two buckets:

In one bucket, you’ve got the support for standards like HTML, CSS, JavaScript. Now, individual browsers might compete on how quickly or how thoroughly they get those standards implemented, but at this point, there’s no disagreement about the fact that proprietary crap is bad, standards are good, and that no matter how painful the process can be, browser makers all need to get together and work on standards together. Heck, even Apple can’t avoid collaborating on this stuff.

In the other bucket, you’ve got all the stuff that browsers compete against each other with: speed, security, the user interface, etc. A lot of this takes place behind closed doors, and that’s fine. There’s no real need for browser makers to collaborate on this stuff, and it could even hurt their competitive advantage if they did collaborate.

But here’s the problem: developer tools seem to be coming out of that second bucket instead of the first. There doesn’t seem to be much communication between the browser makers on developer tools. That’s fine if you see developer tools as an opportunity for competition, but it’s lousy if you see developer tools as an opportunity for interoperability.

This is why Kenneth’s work is so important. He’s crying out for more interoperability between browsers when it comes to developer tools. Why can’t they all use the same low-level APIs under the hood? Then they can still compete on how pretty their dev tools look, without making life miserable for developers who want to move quickly between browsers.

As painful as it might be, I think that browser makers should get together in some semi-formalised way to standardise this stuff. I don’t think that the W3C or the WHATWG are necessarily the right places for this kind of standardisation, but any kind of official cooperation would be good.

Build Process

The panel on build processes for front-end development kicked off with Gareth saying a few words. Some of those words included the sentence:

Make is probably older than you.

Cue glares from me and Scott.

Gareth also said that making websites means making software. We’re all making software—live with it.

This made me nervous. I’ve always felt that one of the great strengths of the web has been its low barrier to entry. The idea of a web that can only be made by qualified software developers doesn’t sound like a good thing to me.

Fortunately, things got cleared up later on. Somebody else asked a question about whether the barrier to entry was being raised by the complexity of tools like preprocessors, compilers, and transpilers. The consensus of the panel was that these are power tools for power users. So if someone were learning to make a website from scratch, you wouldn’t start them off with, say, Sass, without first learning CSS.

It was a fun panel, made particularly enjoyable by the presence of Kyle Simpson. I like the cut of his jib. Alas, I didn’t get the chance to tell him that in person. I had to duck out of the afternoon’s panels to get back to Brighton due to unforeseen family circumstances. But I did manage to catch some of the later panels on the live stream.

Closing thoughts

A common thread I noticed amongst many of the panels was a strong bias for decentralisation, rather than collaboration. That was most evident with Web Components—the whole point is that you can make up your own particular solution rather than waiting for a standards body. But it was also evident in the Developer Tools line-up, where each browser maker is reinventing the same wheels. And when it came to Build Process, it struck me that everyone is scratching their own itch instead of getting together to work on an itch solution.

There’s nothing wrong with that kind of Darwinian approach to solving our problems, but it does seem a bit wasteful. Mairead Buchan was at Edge Conference too and she noticed the same trend. Sounds like she’s going to do something about it too.

Making progress

When I was talking about Async, Ajax, and animation, I mentioned the little trick I’ve used of generating a progress element to indicate to the user that an Ajax request is underway.

I sometimes use the same technique even if Ajax isn’t involved. When a form is being submitted, I find it’s often good to provide explicit, immediate feedback that the submission is underway. Sure, the browser will do its own thing, but a browser doesn’t differentiate between showing that a regular link has been clicked and showing that all those important details you just entered into a form are on their way.

Here’s the JavaScript I use. It’s fairly simplistic, and I’m limiting it to POST requests only. At the moment that a form begins to submit, a progress element is inserted at the end of the form …which is usually right by the submit button that the user will have just pressed.

While I’m at it, I also set a variable to indicate that a POST submission is underway. So even if the user clicks on that submit button multiple times, only one request is sent.

You’ll notice that I’m attaching an event listener to each form element, rather than using event delegation to listen for a click event on the parent document and then figuring out whether that click event was triggered by a submit button. Usually I’m a big fan of event delegation but in this case, it’s important that the event I’m listening to is the submit event. A form won’t fire that event unless the data is truly winging its way to the server. That means you can do all the client-side validation you want—making good use of the required attribute where appropriate—safe in the knowledge that the progress element won’t be generated until the form has passed its validation checks.
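By way of illustration, here’s a minimal sketch of that pattern in plain JavaScript. It isn’t the original script: the structure follows the description above, and the variable names and fallback text are made up.

```javascript
// Attach a listener to each form on the page (not a delegated click handler).
document.querySelectorAll('form').forEach(function (form) {

  // Limit this to POST requests only.
  if (form.method !== 'post') {
    return;
  }

  // Tracks whether a submission from this form is already underway.
  var submitting = false;

  // The submit event only fires once client-side validation has passed.
  form.addEventListener('submit', function (event) {

    // If a request is already on its way, swallow any repeat submissions.
    if (submitting) {
      event.preventDefault();
      return;
    }
    submitting = true;

    // Generate a progress element at the end of the form, which puts it
    // right next to the submit button the user will have just pressed.
    var progress = document.createElement('progress');
    progress.textContent = 'Sending…'; // fallback text for older browsers
    form.appendChild(progress);
  });
});
```

Because the submitting flag lives in a closure for each form, multiple forms on the same page won’t interfere with one another.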

If you like this particular pattern, feel free to use the code. Better yet, improve upon it.

Pattern sharing

Mike has written about the Code for America alpha website that we collaborated on:

We chose to work with ClearLeft because they develop a pattern portfolio (a pattern/style library) which would allow us to scale our work to our Brigades. This unique approach has aligned perfectly with our work style and decentralized organizational structure.

Thankfully, I think the approach of delivering a pattern portfolio (instead of just pages) isn’t so unique these days. Mind you, it still seems to be more common with in-house teams than agencies. The Mailchimp pattern library is a classic example.

But agencies like Paravel are—like Clearleft—delivering systems, not pages. Dave wrote about providing responsive deliverables:

Responsive deliverables should look a lot like fully-functioning Twitter Bootstrap-style systems custom tailored for your clients’ needs.

I think that’s a good way of looking at it: a Bootstrap for every project.

Here’s the front-end style guide for Code for America.

Usually these front-end deliverables will be password-protected on the Clearleft extranet for the client’s eyes only, but Code for America are all about openness, so they’re more than willing to let us share it with the world. That makes me very happy. I remember encouraging the guys at Starbucks to publish their front-end style guide and I’ve written about this spirit of sharing before:

These style guides and pattern libraries aren’t being published in an attempt to provide ready-made solutions—every project should have its own distinct pattern library. Instead, these pattern libraries are being published in a spirit of openness and sharing …a way of saying “Hey, this is what worked for us in these particular circumstances.”

If you’re poking around the Code for America style guide, you’ll notice that it borrows some ideas from the pattern primer idea I published a while back. But in this iteration, the markup is available via a toggle—a nice variation. There’s also a patchwork page that provides a nice glance-able uninterrupted view of the same patterns.

Every project is a learning experience and each front-end style guide gives us ideas about how to do the next one better. In fact, Mark is busy working on better internal tools for creating these kinds of deliverables—something we’ll definitely be sharing. In the meantime, I’ll be encouraging other clients to be as open as Code for America have been in allowing us to share these deliverables.

For more on the usefulness of front-end style guides, be sure to read Paul’s article on style guides for the web, Anna’s classic 24 Ways article, and of course, Anna’s pocket guide from Five Simple Steps.

Coding for America

Back when I was wandering around America in August, I mentioned that I met up with Mike Migurski in San Francisco:

I played truant from UX Week this morning to meet up with Mike for a coffee and a chat at Cafe Vega. We were turfed out when the bearded, baseball-capped, Draplinesque barista announced he had to shut the doors because he needed to “run out for some milk.” So we went around the corner to the Code For America office.

It wasn’t just a social visit. Mike wanted to chat about the possibility of working with Clearleft. The Code for America site was being overhauled. The new site needed to communicate directly with volunteers, rather than simply being a description of what Code for America does. But the site also needed to be able to change and adapt as the organisation’s activities expanded. So what they needed was not a set of page designs; they needed a system of modular components that could be assembled in a variety of ways.

This was music to my ears. This sort of systems-thinking is exactly the kind of work that Clearleft likes to get its teeth into. I showed Mike some of the previous work we had done in creating pattern libraries, and it became pretty clear that this was just what they were looking for.

When I got back to Brighton, Clearleft assembled a small squad to work on the project. Jon would handle the visual design, with the branding work of Dojo4 as a guide. For the front-end coding, we brought in some outside help. Seeing as the main deliverable for this project was going to be a front-end style guide, who better to put that together than the person who literally wrote the book on front-end style guides: Anna.

I’ll go into more detail about the technical side of things on the Clearleft blog (and we’ll publish the pattern library), but for now, let me just say that the project was a lot of fun, mostly because the people we were working with at Code for America—Mike, Dana, and Cyd—were so ridiculously nice and easy-going.

Anna and Jon would start the day by playing the unofficial project theme song and then get down to working side-by-side. By the end of the day here in Brighton, everyone was just getting started in San Francisco. So the daily “stand up” conference call took place at 5:30pm our time; 9:30am their time. The meetings rarely lasted longer than 10 or 15 minutes, but the constant communication throughout the project was invaluable. And the time difference actually worked out quite nicely: we’d tell them what we had been working on during our day, and if we needed anything from them; then they could put that together during their day so it was magically waiting for us by the next morning.

It’ll be a while yet before the new site rolls out, but in the meantime they’ve put together an alpha site—with a suitably “under construction” vibe—so that anyone can help out with the code and content by making contributions to the GitHub repo.

A map to build by

The fifth and final Build has just wrapped up in Belfast. As always, it delivered an excellent day of thought-provoking talks.

It felt like some themes emerged, not just from this year, but from the arc of the last five years. More than one speaker tapped into a feeling that I’ve had for a while that the web has changed. The web has grown up. Unfortunately, it has grown up to be kind of a dickhead.

There were many times during the day’s talks at Build that I was reminded of Anil Dash’s The Web We Lost. Both Jason and Frank pointed to the imbalance of power on the web, where the bottom line has become more important than the user. It’s a landscape dominated by The Stacks—Google, Facebook, et al.—and by fly-by-night companies who have no interest in being good web citizens, and even less interest in the data that they’re sucking from their users.

Don’t get me wrong: I’m not saying that companies shouldn’t be interested in making money—that’s what companies do. But prioritising profit above all else is not going to result in a stable society. And the web is very much part of the fabric of society now. Still, the web is young enough to have escaped the kind of regulation that “real world” companies would be subjected to. Again, don’t get me wrong: I don’t want top-down regulation. What I want is some common standards of decency amongst web companies. If the web ends up getting regulated because of repeated acts of abuse, it will be a tragedy of the commons on an unprecedented scale.

I realise that sounds very gloomy and doomy, and I don’t want to give the impression that Build was a downer—it really wasn’t. As the last ever speaker at Build, Frank ended on a note of optimism. Sure, the way we think about the web now is filled with negative connotations: it appears money-grabbing, shallow, and locked down. But that doesn’t mean that the web is inherently like that.

Harking back to Ethan’s fantastic talk at last year’s Build, Frank made the point that our map of the web makes it seem a grim place, but the territory of the web isn’t necessarily a lost cause. What we need is a better map. A map of openness, civility, and—something that’s gone missing from the web’s younger days—a touch of wildness.

I take comfort from that. I take comfort from that because we are the map makers. The worst thing that could happen would be for us to fatalistically accept the negative turn that the web has taken as inevitable, as “just the way things are.” If the web has grown up to be a dickhead, it’s because we shaped it that way, either through our own actions or inactions. But the web hasn’t finished growing. We can still shape it. We can make it less of a dickhead. At the very least, we can acknowledge that things can and should be better.

I’m not sure exactly how we go about making a better map for the web. I have a vague feeling that it involves tapping into the kind of spirit that informs places like CERN—the kind of spirit that motivated the creation of the web itself. I have a feeling that making a better map for the web doesn’t involve forming startups and taking venture capital. Neither do I think that a map for a better web will emerge from working at Google, Facebook, Twitter, or any of the current incumbents.

So where do we start? How do we begin to attempt to make a better web without getting overwhelmed by the enormity of the task?

Perhaps the answer comes from one of the other speakers at this year’s Build. In a beautifully-delivered presentation, Paul Soulellis spoke about resistance:

How do we, as an industry of creative professionals, reconcile the fact that so much of what we make is used to perpetuate the demands of a bloated marketplace? A monoculture?

He spoke about resisting the intangible nature of digital work with “thingness”, and resisting the breakneck speed of the network with slowness. Perhaps we need our own acts of resistance if we want to change the map of the web.

I don’t know what those acts of resistance are. Perhaps publishing on your own website is an act of resistance—one that’s more threatening to the big players than they’d like to admit. Perhaps engaging in civil discourse online is an act of resistance.

Like I said, I don’t know. But I really appreciate the way that this year’s Build has pushed me into asking these uncomfortable questions. Like the web, Build has grown up over the years. Unlike the web, Build turned out just fine.

Told you so

One of the recurring themes at the Responsive Day Out was how much of a sea change responsive design is. More than once, it was compared to the change we went through going from table layouts to CSS …but on a much bigger scale.

Mark made the point that designing in a liquid way, rather than using media queries, is the real challenge for most people. I think he’s right. I think there’s an over-emphasis on media queries and breakpoints when we talk about responsive design. Frankly, media queries are, for me, the least interesting aspect. And yet, I often hear “media queries” and “responsive design” used interchangeably, as if they were synonyms.

Embracing the fluidity of the medium: that’s the really important bit. I agree with Mark’s assessment that the reason why designers and developers are latching on to media queries and breakpoints is a desire to return to designing for fixed canvases:

What started out as a method to optimise your designs for various screen widths has turned, ever so slowly, into multiple canvas design.

If you’re used to designing fixed-width layouts, it’s going to be really, really hard to get your head around designing and building in a fluid way …at first. In his talk, Elliot made the point that it will get easier once you get the hang of it:

Once you overcome that initial struggle of adapting to a new process, designing and building responsive sites needn’t take any longer, or cost any more money. The real obstacle is designers and developers being set in their ways. I know this because I was one of those people, and to those of you who’ve now fully embraced RWD, you may well be nodding in agreement: we all struggled with it to begin with, just like we did when we moved from table-based layout to CSS.

This is something I’ve been repeating again and again: we’re the ones who imposed the fixed-width constraint onto the medium. If we had listened to John Allsopp and embraced the web for the inherently fluid medium it is, we wouldn’t be having such a hard time getting our heads around responsive design.

But I feel I should clarify something. I’ve been saying “we” have been building fixed-width sites. That isn’t strictly true. I’ve never built a fixed-width website in my life.

Some people find this literally unbelievable. On the most recent Happy Mondays podcast, Sarah said:

I doubt anyone can hold their hands up and say they’ve exclusively worked in fluid layouts since we moved from tables.

Well, my hand is up. And actually, I was working with fluid layouts even when we were still using tables for layout: you can apply percentages to tables too.

Throughout my career, even if the final site was going to be fixed width, I’d still build it in a fluid way, using percentages for widths. At the very end, I’d slap one CSS declaration on the body to fix the width to whatever size was fashionable at the time: 760px, 960px, whatever …that declaration could always be commented out later if the client saw the light.
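To make that concrete, here’s a minimal sketch of the approach, assuming a simple two-column layout; the class names and pixel value are made up for illustration.

```css
/* Everything is sized in percentages, so the layout is fluid by default.
   (This works for table-based layouts too: tables accept percentage widths.) */
.content {
    width: 70%;
}
.sidebar {
    width: 30%;
}

/* The one fixed-width declaration, slapped on at the very end.
   Comment it out and the layout is liquid again. */
body {
    width: 960px;
    margin: 0 auto;
}
```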

Actually, I remember losing work back when I was a freelancer because I was so adamant that a site should be fluid rather than fixed. I was quite opinionated and stubborn on that point.

A search through the archives of my journal attests to that:

Way back in 2003, I wrote:

It seems to me that, all too often, designers make the decision to go with a fixed width design because it is the easier path to tread. I don’t deny that liquid design can be hard. To make a site that scales equally well to very wide as well as very narrow resolutions is quite a challenge.

In 2004, I wrote:

Cast off your fixed-width layouts; you have nothing to lose but your WYSIWYG mentality!

I just wouldn’t let it go. I said:

So maybe I should be making more noise. I could become the web standards equivalent of those loonies with the sandwich boards, declaiming loudly that the end is nigh.

At my very first South by Southwest in 2005, in a hotel room at 5am, when I should’ve been partying, I was explaining to Keith why liquid layouts were the way to go.

Fixed width vs Liquid

That’s just sad.

So you’ll forgive me if I feel a certain sense of vindication now that everyone is finally doing what I’ve been banging on about for years.

I know that it’s very unbecoming of me to gloat. But if you would indulge me for a moment…



I’m sorry. That was very undignified. It’s just that, after TEN BLOODY YEARS, I just had to let it out. It’s not often I get to do that.

Now, does anyone want to revisit the discussion about having comments on blogs?