


Research on evaluating technology

I’ve spent the past few months preparing a new talk for An Event Apart San Francisco (and hopefully some more AEAs after that). As always happens, I spent the whole time vacillating between thinking “this is good!” and thinking “this is awful!” I’m still bouncing between those poles. I won’t really know whether the talk is up to snuff until I actually give it to a live audience.

Over the past few years, my presentations have been building upon one another. Two years ago, my talk was called Enhance! and it set the groundwork for using a layered approach to web design and development. My 2016 talk, Resilience, follows on with a process and examples for that approach (I also set myself the challenge of delivering a talk about progressive enhancement without ever using the phrase “progressive enhancement”).

My new talk goes a bit meta, but in my mind, it’s very much building on the previous talks. The talk is all about evaluating technology. I haven’t settled on a final title, but I was thinking about something obtuse, like …Evaluating Technology.

Here’s my hastily scribbled description:

We work with technology every day. And every day it seems like there’s more and more technology to understand: graphic design tools, build tools, frameworks and libraries, not to mention new HTML, CSS and JavaScript features landing in browsers. How should we best choose which technologies to invest our time in? When we decide to weigh up the technology choices that confront us, what are the best criteria for doing that? This talk will help you evaluate tools and technologies in a way that best benefits the people who use the websites that we are designing and developing. Let’s take a look at some of the hottest new web technologies like service workers and web components. Together we will dig beneath the hype to find out whether they will really change life on the web for the better.

As ever, I’ll begin and end with a long-zoom pretentious arc of history, but I’ll dive into practical stuff in the middle. That’s become a bit of a cliché for my presentations, but the formula works as a sort of microcosm of a good conference—a mixture of the inspirational and the practical, trying to keep a good balance of both.

For this new talk, the practical focus will be on some web technologies that are riding high on the hype cycle right now: service workers, web components, progressive web apps. I’ll use them as a lens for applying broader questions about how we make decisions about the technologies we embrace, and the technologies we reject.

Technology. Now there’s a big subject. It’s literally the entirety of human history. I had to be careful not to go down too many rabbit holes. I’m still not sure if I’ve succeeded, but I’ve already had to ruthlessly cull some darlings.

One of the nice things that the An Event Apart crew started doing was to provide link lists for each talk to attendees. That gives me an opportunity to touch briefly on a topic in the talk itself, but allow any interested attendees to dive deeper at their leisure.

For this talk on evaluating technology, I’ve put together this list of hyperlinks for further reading, watching, listening, and researching…






Laurie Voss has written a thoughtful article called Web development has two flavors of graceful degradation in response to Nolan Lawson’s recent article. But I’m afraid I don’t agree with Laurie’s central premise:

…web app development and web site development are so different now that they probably shouldn’t be called the same thing anymore.

This is an idea I keep returning to, and each time I do, I find that it just isn’t that simple. There are very few web thangs that are purely interactive without any content, and there are also very few web thangs that are purely passive without any interaction. Instead, it’s a spectrum. Quite often, the position on that spectrum changes according to the needs of the user at any particular time—are Twitter and Flickr web sites while I’m viewing text and images, but then transmogrify into web apps the moment I want to add, update, or delete a piece of text or an image?

In any case, the more interesting question than “is something a web site or a web app?” is the question “why?” Why does it matter? In my experience, the answer to that question generally comes down to the kind of architectural approach that a developer will take.

That’s exactly what Laurie dives into in his post. For web apps, use one architectural approach—for web sites, use a different architectural approach. To summarise:

  • in a web app, front-load everything and rely on client-side JavaScript for all subsequent interaction,
  • in a web site, optimise for many page loads, and make sure you don’t rely on client-side JavaScript.

I’m oversimplifying here, but the general idea is:

  • build web apps with the single page app architecture,
  • build web sites with progressive enhancement.

That’s sensible advice, but I’m worried that it could lead to a tautological definition of what constitutes a web app:

  1. This is a web app so it’s built as a single page app.
  2. Why do you define it as a web app?
  3. Because it’s built as a single page app.

The underlying question of what makes something a web app is bypassed by the architectural considerations …but the architectural considerations should be based on that underlying question. Laurie says:

If you are developing an app, the user ideally loads the app exactly once — whether it’s over a slow connection or not.

And similarly:

But if you are developing a web site consisting of many discrete pages, the act of loading goes from a single event to the most common event.

I completely agree that the architectural approach of single page apps is better suited to some kinds of web thangs more than others. It’s a poor architectural choice for a content-based site like nasa.gov, for example. Progressive enhancement would make more sense there.

But I don’t think that the architectural choices need to be in opposition. It’s entirely possible to reconcile the two. It’s not always easy—and the further along that spectrum you are, the tougher it gets—but it’s doable. You can begin with progressive enhancement, and then build up to a single page app architecture for more capable browsers.

I think that’s going to get easier as frameworks adopt a more mixed approach. Almost all the major libraries are working on server-side rendering as a default. Ember is leading the way with FastBoot, and Angular Universal is following. Neither of them are doing it for reasons of progressive enhancement—they’re doing it for performance and SEO—but the upshot is that you can more easily build a web app that simultaneously uses progressive enhancement and a single-page app model.

I guess my point is that I don’t think we should get too locked into the idea of web apps and web sites requiring fundamentally different approaches, especially with the changes in the technologies we use to build them.

We’ve made the mistake in the past of framing problems as “either/or”, when in fact, the correct solution was “both!”:

  • you can either have a desktop site or a mobile site,
  • you can either have rich interactivity or accessibility,
  • you can either have a single page app or progressive enhancement.

We don’t have to choose. It might take more work, but we can have our web cake and eat it.

The false dichotomy that I’m most concerned about is the pernicious idea that offline functionality is somehow in opposition to progressive enhancement. Given the design of service workers, I find this proposition baffling.

This remark by Tom is the very definition of a false dichotomy:

People who say your site should work without JavaScript are actually hurting the people they think they’re helping.

He was also linking to Nolan’s article, which could indeed be read as saying that you should build for offline instead of building with progressive enhancement. But I don’t think that’s what Nolan is saying (at least, I sincerely hope not). I think that Nolan is saying that we should prioritise the offline scenario over scenarios where JavaScript fails or isn’t available. That’s a completely reasonable thing to say. But the idea that we should build for the offline scenario instead of scenarios where JavaScript fails is absurdly reductionist. We don’t have to choose!

But I can certainly understand how developers might come to believe that building a progressive web app is at odds with progressive enhancement. Having made a bunch of progressive web apps—Huffduffer, The Session, this site—I can testify that service workers work superbly as a layer on top of an existing site, but all the messaging around progressive web apps seems fixated on the idea of the app-shell model (a small tweak to the single page app model, where a little bit of interface is available on the initial page load instead of requiring JavaScript for absolutely everything). Again, it’s entirely possible to reconcile the app-shell approach with server rendering and progressive enhancement, but nobody seems to be talking about that. Instead, all of the examples and demos are built with an assumption about JavaScript availability.

Assumptions are the problem. Whether it’s assumptions about screen size, assumptions about being able-bodied, assumptions about network connectivity, or assumptions about browser capabilities, I don’t think any assumptions are a safe bet. Now you might quite reasonably say that we have to make some assumptions when we’re building on the web, and you’d be right. But I think we should still aim to keep them to a minimum.

Tom’s tweet included a screenshot of this part of Nolan’s article:

As Benedict Evans has noted, the next billion people who are poised to come online will be using the internet almost exclusively through smartphones. And if Google’s plans with Android One are any indication, then we have a fairly good idea of what kind of devices the “next billion” will be using:

  • They’ll mostly be running Android.
  • They’ll have decent specs (1GB RAM, quad-core processors).
  • They’ll have an evergreen browser and WebView (Android 5+).
  • What they won’t have, however, is a reliable internet connection.

Those seem like a reasonable set of assumptions. But even there, things aren’t so simple. Will people really be using “an evergreen browser and WebView”? Millions of people use proxy browsers like Opera Mini, which means you can’t guarantee JavaScript availability beyond the initial page load. UC Browser—which can also run in proxy mode—is now the second most popular mobile browser in the world.

That’s just one nit-picky example, but what I’m getting at here is that it really isn’t safe to make any assumptions. When we must make assumptions, let’s try to make them a last resort.

And just to be clear here, I’m not saying that, because we can’t make assumptions about devices or browsers, we therefore can’t build rich interactive web apps that work offline. I’m saying that we can build rich interactive web apps that work offline and also work when JavaScript fails or isn’t supported.

You don’t have to choose between progressive enhancement and a single page app/progressive web app/app shell/other things with the word “app”.

Progressive enhancement is an architectural approach to building on the web. You don’t have to use it, but please try to remember that it is your choice to make. You can choose to build a web app using progressive enhancement or not—there is nothing inherent in the nature of the thing you’re building that precludes progressive enhancement.

Personally, I find progressive enhancement a sensible way to counteract any assumptions I might inadvertently make. Progressive enhancement increases the chances that the web site (or web app) I’m building is resilient to the kind of scenarios that I never would’ve predicted or anticipated.

That’s why I choose to use progressive enhancement …and build progressive web apps.

The Rational Optimist

As part of my ongoing obsession with figuring out how we evaluate technology, I finally got around to reading Matt Ridley’s The Rational Optimist. It was an exasperating read.

On the one hand, it’s a history of the progress of human civilisation. Like Steven Pinker’s The Better Angels Of Our Nature, it piles on the data demonstrating the upward trend in peace, wealth, and health. I know that’s counterintuitive, and it seems to fly in the face of what we read in the news every day. Mind you, The New York Times took some time out recently to acknowledge the trend.

Ridley’s thesis—and it’s a compelling one—is that cooperation and trade are the drivers of progress. As I read through his historical accounts of the benefits of open borders and the cautionary tales of small-minded insular empires that collapsed, I remember thinking, “Boy, he must be pretty upset about Brexit—his own country choosing to turn its back on trade agreements with its neighbours so that it could become a small, petty island chasing the phantom of self-sufficiency”. (Self-sufficiency, or subsistence living, as Ridley rightly argues throughout the book, correlates directly with poverty.)

But throughout these accounts, there are constant needling asides pointing to the perceived enemies of trade and progress: bureaucrats and governments, with their pesky taxes and rule of law. As the accounts enter the twentieth century, the gloves come off completely, revealing a pair of dyed-in-the-wool libertarian fists that Ridley uses to pummel any nuance or balance. “Ah,” I thought, “if he cares more about the perceived evils of regulation than the proven benefits of trade, maybe he might actually think Brexit is a good idea after all.”

It was an interesting moment. Given the conflicting arguments in his book, I could imagine him equally well being an impassioned remainer as a vocal leaver. I decided to collapse this probability wave with a quick Google search, and sure enough …he’s strongly in favour of Brexit.

In theory, an author’s political views shouldn’t make any difference to a book about technology and progress. In practice, they barge into the narrative like boorish gatecrashers threatening to derail it entirely. The irony is that while Ridley is trying to make the case for rational optimism, his own personal political feelings are interspersed like a dusting of irrationality, undoing his own well-researched case.

It’s not just the argument that suffers. Those are the moments when the writing starts to get frothy, if not downright unhinged. There were a number of confusing and ugly sentences that pulled me out of the narrative and made me wonder where the editor was that day.

The last time I remember reading passages of such poor writing in a non-fiction book was Nassim Nicholas Taleb’s The Black Swan. In the foreword, Taleb provides a textbook example of the Dunning-Kruger effect by proudly boasting that he does not need an editor.

But there was another reason why I thought of The Black Swan while reading The Rational Optimist.

While Ridley’s anti-government feelings might have damaged his claim to rationality, surely his optimism is unassailable? Take, for example, his conclusions on climate change. He doesn’t (quite) deny that climate change is real, but argues persuasively that it won’t be so bad. After all, just look at the history of false pessimism that litters the twentieth century: acid rain, overpopulation, the Y2K bug. Those turned out okay, therefore climate change will be the same.

It’s here that Ridley succumbs to the trap that Taleb wrote about in his book: using past events to make predictions about inherently unpredictable future events. Taleb was talking about economics—warning of the pitfalls of treating economic data as though it followed a bell curve, when in fact it follows a power-law distribution.

Fine. That’s simply a logical fallacy, easily overlooked. But where Ridley really lets himself down is in the subsequent defence of fossil fuels. Or rather, in his attack on other sources of energy.

When recounting the mistakes of the naysayers of old, he points out that their fundamental mistake is to assume stasis. Hence their dire predictions of war, poverty, and famine. Ehrlich’s overpopulation scare, for example, didn’t account for the world-changing work of Borlaug’s green revolution (and Ridley rightly singles out Norman Borlaug for praise—possibly the single most important human being in history).

Yet when it comes to alternative sources of energy, they are treated as though they are set in stone, incapable of change. Wind and solar power are dismissed as too costly and inefficient. The Rational Optimist was written in 2008. Eight years ago, solar energy must have indeed looked like a costly investment. But things have changed in the meantime.

As Matt Ridley himself writes:

It is a common trick to forecast the future on the assumption of no technological change, and find it dire. This is not wrong. The future would indeed be dire if invention and discovery ceased.

And yet he fails to apply this thinking when comparing energy sources. If anything, his defence of fossil fuels feels grounded in a sense of resigned acceptance; a sense of …pessimism.

Matt Ridley rejects any hope of innovation from new ideas in the arena of energy production. I hope that he might take his own words to heart:

By far the most dangerous, and indeed unsustainable thing the human race could do to itself would be to turn off the innovation tap. Not inventing, and not adopting new ideas, can itself be both dangerous and immoral.


My site has been behaving strangely recently. It was nothing that I could put my finger on—it just seemed to be acting oddly. When I checked to see if everything was okay, I was told that everything was fine, but still, I sensed something that was amiss.

I’ve just realised what it was. Last week on the 30th of September, I didn’t do or say anything special. That was the problem. I had forgotten my blog’s anniversary.

I’m so sorry, adactio.com! Honestly, I had been thinking about it for all of September but then on the day, one thing led to another, I was busy, and it just completely slipped my mind.

So this is a bit late, but anyway …happy fifteenth anniversary to this journal!

We’ve been through a lot together in those fifteen years, haven’t we, /journal? Oh, the places we’ve been and the things we’ve seen!

I remember where we were on our tenth anniversary: Bologna. Remember we were there for the first edition of the From The Front conference? Now, five years on, we’ve just been to the final edition of that same event—a bittersweet occasion.

Like I said five years ago:

It has been a very rewarding, often cathartic experience so far. I know that blogging has become somewhat passé in this age of Twitter and Facebook but I plan to keep on keeping on right here in my own little corner of the web.

I should plan something special for September 30th, 2021 …just to make sure I don’t forget.


In the latest issue of Justin’s excellent Responsive Web Design weekly newsletter, he includes a segment called “The Snippet Show”:

This is what tells all our browsers on all our devices to set the viewport to be the same width of the current device, and to also set the initial scale to 1 (not scaled at all). This essentially allows us to have responsive design consistently.

<meta name="viewport" content="width=device-width, initial-scale=1">

The viewport value for the meta element was invented by Apple when the iPhone was released. Back then, it was a safe bet that most websites were wider than the iPhone’s 320 pixel wide display—most of them were 960 pixels wide …because reasons. So mobile Safari would automatically shrink those sites down to fit within the display. If you wanted to over-ride that behaviour, you had to use the meta viewport gubbins that they made up.

That was nine years ago. These days, if you’re building a responsive website, you still need to include that meta element.

That seems like a shame to me. I’m not suggesting that the default behaviour should switch to assuming a fluid layout, but maybe the browser could just figure it out. After all, the CSS will already be parsed by the time the HTML is rendering. Perhaps a quick test for the presence of a horizontal scrollbar could be used to trigger the shrinking behaviour. No scrollbar, no shrinking.

Maybe someday the assumption behind the current behaviour could be flipped—assume a website is responsive unless the author explicitly requests the shrinking behaviour. I’d like to think that could happen soon, but I suspect that a depressingly large number of sites are still fixed-width (I don’t even want to know—don’t tell me).

There are other browser default behaviours that might someday change. Right now, if I type example.com into a browser, it will first attempt to contact http://example.com rather than https://example.com. That means the example.com server has to do a redirect, costing the user valuable time.

You can mitigate this by putting your site on the HSTS preload list but wouldn’t it be nice if browsers first checked for HTTPS instead of HTTP? I don’t think that will happen anytime soon, but someday …someday.
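
For reference, getting onto that preload list means serving a Strict-Transport-Security header with a long max-age, includeSubDomains, and the preload directive—in Apache, something like this (the max-age value here is just an example):

Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"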

Indie Web Camp Brighton 2016

Indie Web Camp Brighton 2016 is done and dusted. It’s hard to believe that it’s already in its fifth(!) year. As with previous years, it was a lot of fun.


The first day—the discussions day—covered a lot of topics. I led a session on service workers, where we brainstormed offline and caching strategies for personal websites.

There was a design session looking at alternatives to simply presenting everything in a stream. Some great ideas came out of that. And there was a session all about bookmarking and linking. That one really got my brain whirring with ideas for the second day—the making/coding day.

I’ve learned from previous Indie Web Camps that a good strategy for the second day is to have two tasks to tackle: one that’s really easy (so you’ve at least got that to demo at the end), and one that’s more ambitious. This time, I put together a list of potential goals, and then ordered them by difficulty. By the end of the day, I managed to get a few of them done.

First off, I added a small bit of code to my bookmarking flow, so that any time I link to something, I send a ping to the Internet Archive to grab a copy of that URL. So here’s a link I bookmarked to one of Remy’s blog posts, and here it is in the Wayback Machine—see how the date of storage matches the date of my link.

The code to do that was pretty straightforward. I needed to hit the Wayback Machine’s save endpoint, with the URL to be archived appended:
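
https://web.archive.org/save/https://example.com/some-page

(The URL there is just a placeholder—swap in whatever address you want the Internet Archive to grab.)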


I also updated my bookmarklet for posting links so that, if I’ve highlighted any text on the page I’m linking to, that text is automatically pasted in to the description.

I tweaked my webmentions a bit so that if I receive a webmention that has a type of bookmark-of, that is displayed differently to a comment, or a like, or a share. Here’s an example of Aaron bookmarking one of my articles.

The more ambitious plan was to create an over-arching /tags area for my site. I already have tag-based navigation for my journal and my links:

But until this weekend, I didn’t have the combined view:

I didn’t get around to adding pagination. That’s something I should definitely add, because some of those pages get veeeeery long. But I did spend some time adding sparklines. They can be quite revealing, especially on topics that were hot ten years ago, but have faded over time, or topics that have become more and more popular with each year.

All in all, a very productive weekend.

European tour

I’m recovering from an illness that laid me low a few weeks back. I had a nasty bout of man-flu which then led to a chest infection for added coughing action. I’m much better now, but alas, this illness meant I had to cancel my trip to Chicago for An Event Apart. I felt very bad about that. Not only was I reneging on a commitment, but I also missed out on an opportunity to revisit a beautiful city. But it was for the best. If I had gone, I would have spent nine hours in an airborne metal tube breathing recycled air, and then stayed in a hotel room with that special kind of air conditioning that hotels have that always seems to give me the sniffles.

Anyway, no point regretting a trip that didn’t happen—time to look forward to my next trip. I’m about to embark on a little mini tour of some lovely European cities:

  • Tomorrow I travel to Stockholm for Nordic.js. I’ve never been to Stockholm. In fact I’ve only stepped foot in Sweden on a day trip to Malmö to hang out with Emil. I’m looking forward to exploring all that Stockholm has to offer.
  • On Saturday I’ll go straight from Stockholm to Berlin for the View Source event organised by Mozilla. Looks like I’ll be staying in the east, which isn’t a part of the city I’m familiar with. Should be fun.
  • Alas, I’ll have to miss out on the final day of View Source, but with good reason. I’ll be heading from Berlin to Bologna for the excellent From The Front conference. Ah, I remember being at the very first one five years ago! I’ve made it back every second year since—I don’t need much of an excuse to go to Bologna, one of my favourite places …mostly because of the food.

The only downside to leaving town for this whirlwind tour is that there won’t be a Brighton Homebrew Website Club tomorrow. I feel bad about that—I had to cancel the one two weeks ago because I was too sick for it.

But on the plus side, when I get back, it won’t be long until Indie Web Camp Brighton on Saturday, September 24th and Sunday, September 25th. If you haven’t been to an Indie Web Camp before, you should really come along—it’s for anyone who has their own website, or wants to have their own website. If you have been to an Indie Web Camp before, you don’t need me to convince you to come along; you already know how good it is.

Sign up for Indie Web Camp Brighton here. It’s free and it’s a lot of fun.

The importance of owning your data is getting more awareness. To grow it and help people get started, we’re meeting for a bar-camp like collaboration in Brighton for two days of brainstorming, working, teaching, and helping.

The imitation game

Jason shared some thoughts on designing progressive web apps. One of the things he’s pondering is how much you should try to make your web-based offering look and feel like a native app.

This was prompted by an article by Owen Campbell-Moore over on Ev’s blog called Designing Great UIs for Progressive Web Apps. He begins with this advice:

Start by forgetting everything you know about conventional web design, and instead imagine you’re actually designing a native app.

This makes me squirm. I mean, I’m all for borrowing good ideas from other media—native apps, TV, print—but I don’t think that inspiration should mean imitation. For me, that always results in an interface that sits in a kind of uncanny valley of being almost—but not quite—like the thing it’s imitating.

With that out of the way, most of the recommendations in Owen’s article are sensible ideas about animation, input, and feedback. But then there’s recommendation number eight: Provide an easy way to share content:

PWAs are often shown in a context where the current URL isn’t easily accessible, so it is important to ensure the user can easily share what they’re currently looking at. Implement a share button that allows users to copy the URL to the clipboard, or share it with popular social networks.

See, when a developer has to implement a feature that the browser should be providing, that seems like a bad code smell to me. This is a problem that Opera is solving (and Google says it is solving, while simultaneously penalising developers who expose the URL to end users).

Anyway, I think my squeamishness about all the advice to imitate native apps is because it feels like a cargo cult. There seems to be an inherent assumption that native is intrinsically “better” than the web, and that the only way that the web can “win” is to match native apps note for note. But that misses out on all the things that only the web can do—instant distribution, low-friction sharing, and the ability to link to any other resource on the web (and be linked to in turn). Turning our beautifully-networked nodes into standalone silos just because that’s the way that native apps have to work feels like the cure that kills the patient.

If anything, my advice for building a progressive web app would be the exact opposite of Owen’s: don’t forget everything you’ve learned about web design. In my opinion, the term “progressive web app” can be read in order of priority:

  1. Progressive—build in a layered way so that anyone can access your content, regardless of what device or browser they’re using, rewarding the more capable browsers with more features.
  2. Web—you’re building for the web. Don’t lose sight of that. URLs matter. Accessibility matters. Performance matters.
  3. App—sure, borrow what works from native apps if it makes sense for your situation.

Jason asks questions about how your progressive web app will behave when it’s added to the home screen. How much do you match the platform? How do you manage going chromeless? And the big one: what do users expect?

Will people expect an experience that maps to native conventions? Or will they be more accepting of deviation because they came to the app via the web and have already seen it before installing it?

These are good questions and I share Jason’s hunch:

My gut says that we can build great experiences without having to make it feel exactly like an iOS or Android app because people will have already experienced the Progressive Web App multiple times in the browser before they are asked to install it.

In all the messaging from Google about progressive web apps, there’s a real feeling that the ability to install to—and launch from—the home screen is a real game changer. I’m not so sure that we should be betting the farm on that feature (the offline possibilities opened up by service workers feel like more of a game-changer to me).

People have been gleefully passing around the statistic that the average number of native apps installed per month is zero. So how exactly will we measure the success of progressive web apps against native apps …when the average number of progressive web apps installed per month is zero?

I like Android’s add-to-home-screen algorithm (although it needs tweaking). It’s a really nice carrot to reward the best websites with. But let’s not get carried away. I think that most people are not going to click that “add to home screen” prompt. Let’s face it, we’ve trained people to ignore prompts like that. When someone is trying to find some information or complete a task, a prompt that pops up saying “sign up to our newsletter” or “download our native app” or “add to home screen” is a distraction to be dismissed. The fact that only the third example is initiated by the operating system, rather than the website, is irrelevant to the person using the website.

Getting the “add to home screen” prompt for https://huffduffer.com/ on Android Chrome.

My hunch is that the majority of people will still interact with your progressive web app via a regular web browser view. If, then, only a minority of people are going to experience your site launched from the home screen in a native-like way, I don’t think it makes sense to prioritise that use case.

The great thing about progressive web apps is that they are first and foremost websites. Literally everyone who interacts with your progressive web app is first going to do so the old-fashioned way, by following a link or typing in a URL. They may later add it to their home screen, but that’s just a bonus. I think it’s important to build progressive web apps accordingly—don’t pretend that it’s just like building a native app just because some people will be visiting via the home screen.

I’m worried that developers are going to think that progressive web apps are something that need to be built from scratch; that you have to start with a blank slate and build something new in a completely new way. Now, there are some good examples of these kinds of one-off progressive web apps—The Guardian’s RioRun is nicely done. But I don’t think that the majority of progressive web apps should fall into that category. There’s nothing to stop you taking an existing website and transforming it step-by-step into a progressive web app:

  1. Switch over to HTTPS if you aren’t already.
  2. Use a service worker, even if it’s just to provide a custom offline page and cache some static assets.
  3. Make a manifest file to point to an icon and specify some colours.
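
That last step doesn’t take much. A bare-bones manifest might look something like this (the names, colours, and icon path here are just placeholders):

{
  "name": "Example Site",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#333333",
  "icons": [{
    "src": "/icon.png",
    "sizes": "512x512",
    "type": "image/png"
  }]
}

Then point to it from the head of your pages:

<link rel="manifest" href="/manifest.json">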

See? Not exactly a paradigm shift in how you approach building for the web …but those deceptively straightforward steps will really turbo-boost your site.

I’m really excited about progressive web apps …but mostly for the “progressive” and “web” parts. Maybe I’ll start calling them progressive web sites. Or progressive web thangs.

Marking up help text in forms

Zoe asked a question on Twitter recently:

‘Sfunny—I had been pondering this exact question. In fact, I threw a CodePen together a couple of weeks ago.

Visually, both examples look the same; there’s a label, then a form field, then some extra text (in this case, a validation message).

The first example puts the validation message in an em element inside the label text itself, so I know it won’t be missed by a screen reader—I think I first learned this technique from Derek many years ago.

<div class="first error example">
 <label for="firstemail">Email
 <em class="message">must include the @ symbol</em>
 </label>
 <input type="email" id="firstemail" placeholder="e.g. you@example.com">
</div>

The second example puts the validation message after the form field, but uses aria-describedby to explicitly associate that message with the form field—this means the message should be read after the form field.

<div class="second error example">
 <label for="secondemail">Email</label>
 <input type="email" id="secondemail" placeholder="e.g. you@example.com" aria-describedby="seconderror">
 <em class="message" id="seconderror">must include the @ symbol</em>
</div>

In both cases, the validation message won’t be missed by screen readers, although there’s a slight difference in the order in which things get read out. In the first example we get:

  1. Label text,
  2. Validation message,
  3. Form field.

And in the second example we get:

  1. Label text,
  2. Form field,
  3. Validation message.

In this particular example, the ordering in the second example more closely matches the visual representation, although I’m not sure how much of a factor that should be in choosing between the options.

Anyway, I was wondering whether one of these two options is “better” or “worse” than the other. I suspect that there isn’t a hard and fast answer.

Why do pull quotes exist on the web?

There you are reading an article when suddenly it’s interrupted by a big piece of text that’s repeating something you just read in the previous paragraph. Or it’s interrupted by a big piece of text that’s spoiling a sentence that you are about to read in subsequent paragraphs.

There you are reading an article when suddenly it’s interrupted by a big piece of text that’s repeating something you just read in the previous paragraph.

To be honest, I find pull quotes pretty annoying in printed magazines too, but I can at least see the justification for them there: if you’re flipping through a magazine, they act as eye-catching inducements to stop and read (in much the same way that good photography does or illustration does). But once you’re actually reading an article, they’re incredibly frustrating.

You either end up learning to blot them out completely, or you end up reading the same sentence twice.

You either end up learning to blot them out completely, or you end up reading the same sentence twice. Blotting them out is easier said than done on a small-screen device. At least on a large screen, pull quotes can be shunted off to the side, but on handheld devices, pull quotes really make no sense at all.

Are pull quotes online an example of a skeuomorph? “An object or feature which imitates the design of a similar artefact made from another material.”

I think they might simply be an example of unexamined assumptions. The default assumption is that pull quotes on the web are fine, because everyone else is doing pull quotes on the web. But has anybody ever stopped to ask why? It was this same spiral of unexamined assumptions that led to the web drowning in a sea of splash pages in the early 2000s.

I think they might simply be an example of unexamined assumptions.

I’m genuinely curious to hear the design justification for pull quotes on the web (particularly on mobile), because as a reader, I can give plenty of reasons for their removal.

Exploring web technologies

Last week, I had two really enjoyable experiences discussing completely opposite ends of the web technology stack.

Tuesday is Codebar day here in Brighton. Clearleft hosted it at 68 Middle Street last week. I really, really enjoy coaching at Codebar. I particularly like teaching the absolute basics of HTML and CSS. There’s something so rewarding about seeing the “a-ha!” moments when concepts click with people. I also love answering the inevitable questions that arise, like “why is it like that?”, or “how do I do this?”

Fantastic coding tonight! Great to see you all. Thanks for coming and thanks @68MiddleSt & @clearleft for having us.

Thursday was devoted to the opposite end of the spectrum. I ran a workshop at Clearleft with some developers from one of our clients. The whole day was dedicated to exploring and evaluating up-and-coming web technologies. Basically, it was a chance to geek out about all the stuff I’ve been linking to or writing about. During the workshop I ended up making a lot of use of my tagging system here on adactio.com:

Prioritising topics for discussion.

Web components and service workers ended up at the top of the list of technologies to tackle, which was fortuitous, given my recent thoughts on comparing the two:

First of all, ask the question “who benefits from this technology?” In the case of service workers, it’s the end users. They get faster websites that handle network failure better. In the case of web components, there are no direct end-user benefits. Web components exist to make developers’ lives easier. That’s absolutely fine, but any developer convenience gained by the use of web components can’t come at the expense of the user—that price is too high.

The next question we usually ask when we’re evaluating a technology is “how well does it work?” Personally, I think it’s just as important to ask “how well does it fail?”

Those two questions turned out to be a good framework for the whole workshop. The question of how to evaluate technologies is something I’ve been thinking about a lot lately. I’m pretty sure it will be what my next conference talk is going to be all about.

You can read more about the structure of the workshop over on the Clearleft site. I’m looking forward to running it again sometime. But I’m equally looking forward to getting back to the basics at the next Codebar.

Extensible web components

Adam Onishi has written up his thoughts on web components and progressive enhancement, following on from a discussion we were having on Slack. He shares a lot of the same frustrations as I do.

Two years ago, I said:

I have conflicting feelings about Web Components. I am simultaneously very excited and very nervous.

I still feel that way. In theory, web components are very exciting. In practice, web components are very worrying. The worrying aspect comes from the treatment of backwards compatibility.

It all comes down to the way custom elements work. When you make up a custom element, it’s basically a span.
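
Something like this, say (the element name is made up):

<fancy-select></fancy-select>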


Then, using JavaScript with ShadowDOM, templates, and the other specs that together make up the web components ecosystem, you turn that inert span-like element into something all-singing and dancing. That’s great if the browser supports those technologies, and the JavaScript executes successfully. But if either of those conditions aren’t met, what you’re left with is basically a span.

One of the proposed ways around this was to allow custom elements to extend existing elements (not just spans). The proposed syntax for this was an is attribute.

<select is="fancy-select">...</select>

Browser makers responded to this by saying “Nah, that’s too hard.”

To be honest, I had pretty much given up on the is functionality ever seeing the light of day, but Monica has rekindled my hope:

Still, I’m not holding my breath for this kind of declarative extensibility landing in browsers any time soon. Instead, a JavaScript-based way of extending existing elements is currently the only way of piggybacking on all the accessible behavioural goodies you get with native elements.

class FancySelect extends HTMLSelectElement

But this imperative approach fails completely if custom elements aren’t supported, or if the JavaScript fails to execute. Now you’re back to having spans.

The presentation on web components at the Progressive Web Apps Dev Summit referred to this JavaScript-based extensibility as “progressively enhancing what’s already available”, which is a bit of a stretch, given how completely it falls apart in older browsers. It was kind of a weird talk, to be honest. After fifteen minutes of talking about creating elements entirely from scratch, there was a minute or two devoted to the is attribute and extending existing elements …before carrying on as though those two minutes never happened.

But even without any means of extending existing elements, it should still be possible to define custom elements that have some kind of fallback in non-supporting browsers:
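
Something like this, say—wrapping a regular element as fallback content (again, the element name is made up):

<fancy-select>
 <select>
  ...
 </select>
</fancy-select>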


In that situation, you at least get a regular ol’ select element in older browsers (or in modern browsers before the JavaScript kicks in and uplifts the custom element).

Adam has a great example of this in his post:

I’ve been thinking of a gallery component lately, where you’d have a custom element, say <o-gallery> for want of a better example, and simply populate it with images you want to display, with custom elements and shadow DOM you can add all the rest, controls/layout etc. Markup would be something like:

<o-gallery>
 <img src="">
 <img src="">
 <img src="">
</o-gallery>

If none of the extra stuff loads, what do we get? Well you get 3 images on the page. You still get the content, but just none of the fancy interactivity.

Yes! This, in my opinion, is how we should be approaching the design of web components. This is what gets me excited about web components.

Then I look at pretty much all the examples of web components out there and my nervousness kicks in. Hardly any of them spare a thought for backwards-compatibility. Take a look, for example, at the entire contents of the body element for the Polymer Shop demo site:

<shop-app unresolved="">SHOP</shop-app>

This seems really odd to me, because I don’t think it’s a good way to “sell” a technology.

Compare service workers to web components.

First of all, ask the question “who benefits from this technology?” In the case of service workers, it’s the end users. They get faster websites that handle network failure better. In the case of web components, there are no direct end-user benefits. Web components exist to make developers’ lives easier. That’s absolutely fine, but any developer convenience gained by the use of web components can’t come at the expense of the user—that price is too high.

The next question we usually ask when we’re evaluating a technology is “how well does it work?” Personally, I think it’s just as important to ask “how well does it fail?”

Service workers work well and fail well. If a browser supports service workers, the user gets all the benefits. If a browser doesn’t support service workers, the user gets the same experience they would have always had.

Web components (will) work well, but fail badly. If a browser supports web components, the user gets the experience that the developer has crafted using these new technologies. If a browser doesn’t support web components, the user gets …probably nothing. It depends on how the web components have been designed.

It’s so much easier to get excited about implementing service workers. You’ve literally got nothing to lose and everything to gain. That’s not the case with web components. Or at least not with the way they are currently being sold.

See, this is why I think it’s so important to put some effort into designing web components that have some kind of fallback. Those web components will work well and fail well.

Look at the way new elements are designed for HTML. Think of complex additions like canvas, audio, video, and picture. Each one has been designed with backwards-compatibility in mind—there’s always a way to provide fallback content.

Web components give us developers the same power that, up until now, only belonged to browser makers. Web components also give us developers the same responsibilities as browser makers. We should take that responsibility seriously.

Web components are supposed to be the poster child for The Extensible Web Manifesto. I’m all for an extensible web. But the way that web components are currently being built looks more like an endorsement of The Replaceable Web Manifesto. I’m not okay with a replaceable web.

Here’s hoping that my concerns won’t be dismissed as “piffle and tosh” again by the very people who should be thinking about these issues.


I’ve noticed a few nice examples of motion design on the web lately.

The Cloud Four gang recently redesigned their site, including a nice little animation on the home page.

Malcolm Gladwell has a new podcast called Revisionist History. The website for the podcast is quite lovely. Each episode is illustrated with an animated image. Lovely!

If you want to see some swishy animations triggered by navigation, the waaark websites have them a-plenty. Personally I find the scroll-triggered animations on internal pages too much to take (I have yet to find an example of scrolljacking that doesn’t infuriate me). But the homepage illustrations have some lovely subtle movement.

When it comes to subtlety in animation, my favourite example comes from Charlotte. She recently refactored the homepage of the website for the Leading Design conference. It originally featured one big background image. Switching over to SVG saved a lot of bandwidth. But what I really love is that the shapes in the background are now moving …ever so gently.

It’s like gazing at a slow-motion lava lamp of geometry.

Class teacher

ES6 introduced a whole bunch of new features to JavaScript. One of those features is the class keyword. This introduction has been accompanied by a fair amount of concern and criticism.

Here’s the issue: classes in JavaScript aren’t quite the same as classes in other programming languages. In fact, technically, JavaScript doesn’t really have classes at all. But some say that technically isn’t important. If it looks like a duck, and quacks like a duck, shouldn’t we call it a duck even if technically it’s a somewhat similar—but not quite the same—species of waterfowl?

The argument for doing this is that classes are so familiar from other programming languages, that having some way of using classes in JavaScript—even if it isn’t technically the same as in other languages—brings a lot of benefit for people moving over to JavaScript from other programming languages.

But that comes with a side-effect. Anyone learning about classes in JavaScript will basically be told “here’s how classes work …but don’t look too closely.”

Now if you believe that outcomes matter more than understanding, then this is a perfectly acceptable trade-off. After all, we use computers every day without needing to understand the inner workings of every single piece of code under the hood.

It doesn’t sit well with me, though. I think that understanding how something works is important (in most cases). That’s why I favour learning underlying technologies first—HTML, CSS, JavaScript—before reaching for abstractions like frameworks and libraries. If you understand the way things work first, then your choice of framework, library, or any other abstraction is an informed choice.

The most common way that people refer to the new class syntax in JavaScript is to describe it as syntactical sugar. In other words, it doesn’t fundamentally introduce anything new under the hood, but it gives you a shorter, cleaner, nicer way of dealing with objects. It’s an abstraction. But because it’s an abstraction taken from other programming languages that work differently to JavaScript, it’s a bit of fudge. It’s a little white lie. The class keyword in JavaScript will work just fine as long as you don’t try to understand it.
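
To illustrate, here’s a rough sketch of what that sugar is covering up—the class syntax first, then more or less the same thing written the old-fashioned way (the names are made up):

// The new class syntax…
class Duck {
  constructor(name) {
    this.name = name;
  }
  quack() {
    return this.name + ' says quack';
  }
}

// …is more or less sugar for a constructor function and a prototype:
function OldSchoolDuck(name) {
  this.name = name;
}
OldSchoolDuck.prototype.quack = function () {
  return this.name + ' says quack';
};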

My personal opinion is that this isn’t healthy.

I’ve come across two fantastic orators who cemented this view in my mind. At Render Conf in Oxford earlier this year, I had the great pleasure of hearing Ashley Williams talk about the challenges of teaching JavaScript. Skip to the 15 minute mark to hear her introduce the issues thrown up by classes in JavaScript.

More recently, the mighty Kyle Simpson was on an episode of the JavaScript Jabber podcast. Skip to the 17 minute mark to hear him talk about classes in JavaScript.

(Full disclosure: Kyle also said some very kind things about some of my blog posts at the end of that episode, but you can switch it off before it gets to that bit.)

Both Ashley and Kyle bring a much-needed perspective to the discussion of language design. That perspective is the perspective of a teacher.

In his essay on W3C’s design principles, Bert Bos lists learnability among the fundamental driving forces (closely tied to readability). Learnability and teachability are two sides of the same coin, and I find it valuable to examine any language decisions through that lens. With that in mind, introducing a new feature into a language that comes with such low teachability value as to warrant a teacher actively telling a student not to learn how things really work …well, that just doesn’t seem right.

Save the dates for Indie Web Camp Brighton 2016

September 24th and 25th—those are the dates you should put in your diary. That’s when this year’s Indie Web Camp Brighton is happening.

Once again it’ll be at 68 Middle Street, home to Clearleft. You can register for free now, and then add your name to the list of participants on the wiki.

If you haven’t been to an Indie Web Camp before, it’s a very straightforward proposition. The idea is that you should have your own website. That’s it. Everything else is predicated on that. So while there’ll be plenty of discussions, demos, and designs, they’re all in service to that fundamental premise.

The first day of an Indie Web Camp is like a BarCamp. We make a schedule grid at the start of the day and people organise topics by room and time slot. It sounds chaotic. It is chaotic. But it works surprisingly well. The discussions can be about technologies, or interfaces, or ideas, or just about anything really.

The second day is for making. After the discussions from the previous day, most people will have a clear idea at this point for something they might want to do. It might involve adding some new technology to their website, or making some design changes, or helping build a tool. For people starting from scratch, this is the perfect time for them to build and launch a basic website.

At the end of the second day, everyone demos what they’ve done. I’m always amazed by how much people can accomplish in just one weekend. There’s something about having other people around to help you that makes it super productive.

You might be thinking “but I’m not a coder!” Don’t worry—there’ll be plenty of coders there so you can get their help on whatever you might decide to do. If you’re a designer, your skills will be in high demand by those coders. It’s that mish-mash of people that makes it such a fun gathering.

Last year’s Indie Web Camp Brighton was lots of fun. Let’s make Indie Web Camp Brighton 2016 even better!

Indie Web Camp Brighton group photo

Backdoor Service Workers

When I was moderating that panel at the Progressive Web App dev Summit, I brought up this point about twenty minutes in:

Alex, in your talk yesterday you were showing the AMP demo there with the Washington Post. You click through and there’s the Washington Post AMP thing, and it was able to install the Service Worker with that custom element. But I was looking at the URL bar …and that wasn’t the Washington Post. It was on the CDN from AMP. So I talked to Paul Backaus from the AMP team, and he explained that it’s an iframe, and using an iframe you can install a Service Worker from somewhere else.

Alex and Emily explained that, duh, that’s the way iframes work. It makes sense when you think about it—an iframe is pretty much the same as any other browser window. Still, it feels like it might violate the principle of least surprise.

Let’s say you followed my tongue-in-cheek advice to build a progressive web app store. Your homepage might have the latest 10 or 20 progressive web apps. You could also include 10 or 20 iframes so that those sites are “pre-installed” for the person viewing your page.

Enough theory. Here’s a practical example…

Suppose you’ve never visited the website for my book, html5forwebdesigners.com (if you have visited it, and you want to play along with this experiment, go to your browser settings and delete anything stored by that domain).

You happen to visit my website adactio.com. There’s a little blurb buried down on the home page that says “Read my book” with a link through to html5forwebdesigners.com. I’ve added this markup after the link:

<iframe src="https://html5forwebdesigners.com/iframe.html" style="width: 0; height: 0; border: 0"></iframe>

That hidden iframe pulls in an empty page with a script element:

<!DOCTYPE html>
<html lang="en">
<meta charset="utf-8">
<title>HTML5 For Web Designers</title>
<script>
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js'); // the path to the worker script is illustrative
}
</script>
</html>

That registers the Service Worker on my book’s site which then proceeds to install all the assets it needs to render the entire site offline.
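
The service worker itself doesn’t need to be anything exotic. A minimal sketch of an install handler that pre-caches some assets might look something like this (the cache name and file paths are just placeholders):

self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open('site-cache-v1').then(function (cache) {
      // grab everything needed to render the site offline
      return cache.addAll([
        '/',
        '/styles.css',
        '/images/cover.jpg'
      ]);
    })
  );
});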

There you have it. Without ever visiting the domain html5forwebdesigners.com, the site has been pre-loaded onto your device because you visited the domain adactio.com.

A few caveats:

  1. I had to relax the Content Security Policy for html5forwebdesigners.com to allow the iframe to be embedded on adactio.com:

    Header always set Access-Control-Allow-Origin: "https://adactio.com"
  2. If your browser’s settings has “Block third-party cookies and site data” selected in the preferences, the iframe-invoked Service Worker won’t install:

    Uncaught (in promise) DOMException: Failed to register a ServiceWorker: The user denied permission to use Service Worker.

The example I’ve put together here is relatively harmless. But it’s possible to imagine more extreme scenarios. Imagine there’s a publishing company that has 50 websites for 50 different publications. Each one of them could have an empty page waiting to be embedded via iframe from the other 49 sites. You only need to visit one page on one of those 50 sites to have 50 Service Workers spun up and caching assets in the background.

There’s the potential here for a tragedy of the commons. I hope we’ll be sensible about how we use this power.

Just don’t tell the advertising industry about this.

Unlabelled search fields

Adam Silver is writing a book on forms—you may be familiar with his previous book on maintainable CSS. In a recent article (that for some reason isn’t on his blog), he looks at markup patterns for search forms and advocates that we should always use a label. I agree. But for some reason, we keep getting handed designs that show unlabelled search forms. And no, a placeholder is not a label.

I had a discussion with Mark about this the other day. The form he was marking up didn’t have a label, but it did have a button with some text that would work as a label:

<input type="search" placeholder="…">
<button type="submit">Search</button>

He was wondering if there was a way of using the button’s text as the label. I think there is. Using aria-labelledby like this, the button’s text should be read out before the input field:

<input aria-labelledby="searchtext" type="search" placeholder="…">
<button type="submit" id="searchtext">Search</button>

Notice that I say “think” and “should.” It’s one thing to figure out a theoretical solution, but only testing will show whether it actually works.

The W3C’s WAI tutorial on labelling content gives an example that uses aria-label instead:

<input type="text" name="search" aria-label="Search">
<button type="submit">Search</button>

It seems a bit of a shame to me that the label text is duplicated in the button and in the aria-label attribute (and being squirrelled away in an attribute, it runs the risk of metacrap rot). But they know what they’re talking about so there may well be very good reasons to prefer duplicating the value with aria-label rather than pointing to the value with aria-labelledby.

I thought it would be interesting to see how other sites are approaching this pattern—unlabelled search forms are all too common. All the markup examples here have been simplified a bit, removing class attributes and the like…

The BBC’s search form does actually have a label:

<label for="orb-search-q">
Search the BBC
<input id="orb-search-q" placeholder="Search" type="text">
<button>Search the BBC</button>

But that label is then hidden using CSS:

position: absolute;
height: 1px;
width: 1px;
overflow: hidden;
clip: rect(1px, 1px, 1px, 1px);

That CSS—as pioneered by Snook—ensures that the label is visually hidden but remains accessible to assistive technology. Using something like display: none would hide the label for everyone.

Medium wraps the input (and icon) in a label and then gives the label a title attribute. Like aria-label, a title attribute should be read out by screen readers, but it has the added advantage of also being visible as a tooltip on hover:

<label title="Search Medium">
  <span class="svgIcon"><svg></svg></span>
  <input type="search">

This is also what Google does on what must be the most visited search form on the web. But the W3C’s WAI tutorial warns against using the title attribute like this:

This approach is generally less reliable and not recommended because some screen readers and assistive technologies do not interpret the title attribute as a replacement for the label element, possibly because the title attribute is often used to provide non-essential information.

Twitter follows the BBC’s pattern of having a label but visually hiding it. They also have some descriptive text for the icon, and that text gets visually hidden too:

<label class="visuallyhidden" for="search-query">Search query</label>
<input id="search-query" placeholder="Search Twitter" type="text">
<span class="search-icon">
  <button type="submit" class="Icon" tabindex="-1">
    <span class="visuallyhidden">Search Twitter</span>

Here’s their CSS for hiding those bits of text—it’s very similar to the BBC’s:

.visuallyhidden {
  border: 0;
  clip: rect(0 0 0 0);
  height: 1px;
  margin: -1px;
  overflow: hidden;
  padding: 0;
  position: absolute;
  width: 1px;
}

That’s exactly the CSS recommended in the W3C’s WAI tutorial.

Flickr have gone with the aria-label pattern as recommended in that W3C WAI tutorial:

<input placeholder="Photos, people, or groups" aria-label="Search" type="text">
<input type="submit" value="Search">

Interestingly, neither Twitter nor Flickr are using type="search" on the input elements. I’m guessing this is probably because of frustrations with trying to undo the default styles that some browsers apply to input type="search" fields. Seems a shame though.
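
For what it’s worth, undoing those defaults usually involves a reset along these lines (the WebKit pseudo-elements shown here are a common case; the details vary from browser to browser):

/* Strip the platform styling that some browsers give search fields */
input[type="search"] {
  -webkit-appearance: none;
  appearance: none;
  border-radius: 0;
}

/* Hide the built-in magnifying-glass and clear-button decorations */
input[type="search"]::-webkit-search-decoration,
input[type="search"]::-webkit-search-cancel-button {
  -webkit-appearance: none;
}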

Instagram also doesn’t use type="search" and makes no attempt to expose any kind of accessible label:

<input type="text" placeholder="Search">
<span class="coreSpriteSearchIcon"></span>

Same with Tumblr:

<input tabindex="1" type="text" name="q" id="search_query" placeholder="Search Tumblr" autocomplete="off" required="required">

…although the search form itself does have role="search" applied to it. Perhaps that helps to mitigate the lack of a clear label?

After that whistle-stop tour of a few of the web’s unlabelled search forms, it looks like the options are:

  • a visually-hidden label element,
  • an aria-label attribute,
  • a title attribute, or
  • text associated using aria-labelledby.

But that last one needs some testing.

Update: Emil did some testing. Looks like all screen-reader/browser combinations will read the associated text.

Talking about hypertext

#CSSday starts off with a great history lesson of our industry by @adactio

I’ve just published a transcript of the talk I gave at the HTML Special that preceded CSS Day a couple of weeks back. I’ve also recorded an audio version for your huffduffing pleasure.

It’s not like the usual talks I give. The subject matter was assigned to me, Mission Impossible style. PPK wanted each speaker to give an entire talk on just one HTML element. He offered me the best element of them all: the A element.

There were a few different directions I could’ve taken it. I could’ve tried to make it practical, but I quickly dismissed that idea. Instead I went in the completely opposite direction, making it as pretentious as possible. I figured a talk about hypertext could afford to be winding and circuitous, building on some of the ideas I wrote about in my piece for The Manual a few years back. It’s quite self-indulgent of me, but I used it as an opportunity to geek out about some of my favourite things: from Borges, Babbage, and Bletchley to Leibniz, Lovelace, and Licklider.

I wouldn’t usually write out an entire talk word-for-word in advance, but somehow it felt right for this one. In fact, my talk preparation this time ‘round was very similar to the process Charlotte recently wrote about:

  1. Get everything out of my head and onto a mind map.
  2. Write chunks of content in short bursts—this was when I was buddying up with Paul.
  3. Put together a slide deck of visuals to support the narrative.
  4. Practice delivering the talk so I don’t look like I’m just reading off a screen.

It takes me a long time to prepare talks. As the deadline for this one approached, I was getting quite panicked. It was touch and go there for a while, but I managed to get it done in time.

I’m pleased with how it turned out. On the day, I had fun delivering it. People seemed to like it too, which was gratifying.

Although with this kind of talk, it was inevitable that I wouldn’t be able to please everyone.

I guess this talk was a one-off affair. That said, if you’re putting on an event and you think this subject matter would be appropriate, let me know. I’d be more than happy to deliver it again.

On the side

My role at Clearleft is something along the lines of being a technical director. I’m not entirely sure what that means, but it seems to be a way of being involved in front-end development, without necessarily writing much actual code. That’s probably for the best. My colleagues Mark, Graham, and Charlotte are far more efficient at doing that. In return, I do my best to support them and make sure that they’ve got whatever they need (in terms of resources, time, and space) to get on with their work.

I’m continuously impressed not only by the quality of their output on client projects, but also by their output on the side.

Mark is working on a project called Fractal. It’s a tool for creating component libraries, something he has written about before. The next steps involve getting the code to version 1.0 and completing the documentation. Then you’ll be hearing a lot more about this. The tricky thing right now is fitting it in around client work. It’s going to be very exciting though—everyone who has been beta-testing Fractal has had very kind words to say. It’s quite an impressive piece of work, especially considering that it’s the work of one person.

Graham is continuing on his crazily-ambitious project to recreate the classic NES game Legend Of Zelda using web technology. His documentation of his process is practically a book:

  1. Introduction,
  2. The Game Loop,
  3. Drawing to the Screen,
  4. Handling User Input,
  5. Scaling the Canvas,
  6. Animation — Part 1,
  7. Levels & Collision — part 1, and most recently
  8. Levels — part 2.

It’s simultaneously a project that involves the past—retro gaming—and the future—playing with the latest additions to JavaScript in modern browsers (something that feeds directly back into client work).

Charlotte has been speaking up a storm. She spoke at the Up Front conference in Manchester about component libraries:

The process of building a pattern library or any kind of modular design system requires a different approach to delivering a set of finished pages. Even when the final deliverable is a pattern library, we often still have to design pages for approval. When everyone is so used to working with pages, it can be difficult to adopt a new way of thinking—particularly for those who are not designers and developers.

This talk will look at how we can help everyone in the team adopt pattern thinking. This includes anyone with a decision to make—not just designers and developers. Everyone in the team can start building a shared vocabulary and attempt to make the challenge of naming things a little easier.

Then she spoke at Dot York about her learning process:

As a web developer, I’m learning all the time. I need to know how to make my code work, but more importantly, I want to understand why my code works. I’ve learnt most of what I know from people sharing what they know and I love that I can now do the same. In my talk I want to share my highlights and frustrations of continuous learning, my experiences of working with a mentor and fitting it into my first year at Clearleft.

She’ll also be speaking at Beyond Tellerrand in Berlin later this year. Oh, and she’s also now a co-organiser of the brilliant Codebar events that happen every Tuesday here in Brighton.

Altogether that’s an impressive amount of output from Clearleft’s developers. And all of that doesn’t include the client work that Mark, Graham, and Charlotte are doing. They inspire me!


At An Event Apart in Boston, I had the pleasure of meeting Hannah Birch from ProPublica. It turns out that she was a copy editor in a previous life. I began gushing about the pleasure of working with a great editor.

I’ve been lucky enough to work with some of the best. Working with Mandy on HTML5 For Web Designers was wonderful. One of these days I hope to work with Owen Gregory.

When I think back on happy memories of working with world-class editors, I always remember a Skype call about an article I was writing for The Manual. I talked with my editor for hours about the finer points of wordsmithery, completely losing track of time. It was a real joy. That editor was Carolyn Wood.

Carolyn is going through a bad time right now. A really bad time. A combination of awful medical problems and a Kafkaesque labyrinth of health insurance has created a perfect shitstorm. I feel angry, sad, and helpless. At least I can do something about that last part. And you can too.

If you’d like to help, Karen has set up a page for contributing to help Carolyn. If you could throw a few bucks in there, I would appreciate it very much. Thank you.