
Friday, March 24th, 2017

Code (p)reviews

I’m not a big fan of job titles. I’ve always had trouble defining what I do as a noun—I much prefer verbs (“I make websites” sounds fine, but “website maker” sounds kind of weird).

Mind you, the real issue is not finding the right words to describe what I do, but rather figuring out just what the heck it is that I actually do in the first place.

According to the Clearleft website, I’m a technical director. That doesn’t really say anything about what I do. To be honest, I tend to describe my work these days in terms of what I don’t do: I don’t tend to write a lot of HTML, CSS, and JavaScript on client projects (although I keep my hand in with internal projects, and of course, personal projects).

Instead, I try to make sure that the people doing the actual coding—Mark, Graham, and Danielle—are happy and have everything they need to get on with their work. From outside, it might look like my role is managerial, but I see it as the complete opposite. They’re not in service to me; I’m in service to them. If they’re not happy, I’m not doing my job.

There’s another aspect to this role of technical director, and it’s similar to the role of a creative director. Just as a creative director is responsible for the overall direction and quality of designs being produced, I have oversight of the quality of front-end output. I don’t want to be a bottleneck in the process though, and to be honest, most of the time I don’t do much checking on the details of what’s being produced because I completely trust Mark, Graham, and Danielle to produce top quality code.

But I feel I should be doing more. Again, it’s not that I want to be a bottleneck where everything needs my approval before it gets delivered, but I hope that I could help improve everyone’s output.

Now the obvious way to do this is with code reviews. I do it a bit, but not nearly as much as I should. And even when I do, I always feel it’s a bit late to be spotting any issues. After all, the code has already been written. Also, who am I to try to review the code produced by people who are demonstrably better at coding than I am?

Instead I think it will be more useful for me to stick my oar in before a line of code has been written; to sit down with someone and talk through how they’re going to approach solving a particular problem, creating a particular pattern, or implementing a particular user story.

I suppose it’s really not that different to rubber ducking. Having someone to talk out loud with about potential solutions can be really valuable in my experience.

So I’m going to start doing more code previews. I think it will also incentivise me to do more code reviews—being involved in the initial discussion of a solution means I’m going to want to see the final result.

But I don’t think this should just apply to front-end code. I’d also like to exercise this role as technical director with the designers on a project.

All too often, decisions are made in the design phase that prove problematic in development. It usually works out okay, but it often means revisiting the designs in light of some technical considerations. I’d like to catch those issues sooner. That means sticking my nose in much earlier in the process, talking through what the designers are planning to do, and keeping an eye out for any potential issues.

So, as technical director, I won’t be giving feedback like “the colour’s not working for me” or “not sure about those type choices” (I’ll leave that to the creative director), but instead I can ask questions like “how will this work without hover?” or “what happens when the user does this?” as well as pointing out solutions that might be tricky or time-consuming to implement from a technical perspective.

What I want to avoid is the swoop’n’poop, when someone seagulls in after something has been designed or built and points out all the problems. The earlier in the process any potential issues can be spotted, the better.

And I think that’s my job.

Wednesday, March 22nd, 2017

Writing on the web

Some people have been putting Paul’s crazy idea into practice.

  • Mike revived his site a while back and he’s been posting gold dust ever since. I enjoy his no-holds-barred perspective on his time in San Francisco.
  • Garrett’s writing goes all the way back to 2005. The cumulative result is two fascinating interweaving narratives—one about his health, another about his business.
  • Charlotte has been documenting her move from Brighton to Sydney. Much as I love her articles about front-end development, I’m liking the slice-of-life updates on life down under even more.
  • Amber has a great way with words. As well as regularly writing on her blog, she’s two-thirds of the way through writing 100 words every day for 100 days.
  • Ethan has been writing about responsive design—of course—but it’s his more personal posts that make me really grateful for his site.
  • Jeffrey and Eric never stopped writing on their own sites. Sure, there’s good stuff on their sites about web design and development, but it’s the writing about their non-web lives that’s so powerful.

There are more people I could mention …but, to be honest, not that many more. Seems like most people are happy to only publish on Ev’s blog or not at all.

I know not everybody wants to write on the web, and that’s fine. But it makes me sad when people choose not to publish their thoughts because they think no-one will be interested, or that it’s all been said before. I understand where those worries come from, but I believe—no, I know—that they are unfounded.

It’s a world wide web out there. There’s plenty of room for everyone. And I, for one, love reading the words of others.

Wednesday, March 15th, 2017

Progressive Web App questions

I got a nice email recently from Colin van Eenige. He wrote:

For my graduation project I’m researching the development of Progressive Web Apps and found your offline book called resilient web design. I was very impressed by the implementation of the website and it really was a nice experience.

I’m very interested in your vision on progressive web apps and what capabilities are waiting for us regarding offline content. Would it be fine if I’d send you some questions?

I said that would be fine, although I couldn’t promise a swift response. He sent me four questions. I finally got ‘round to sending my answers…

1. https://resilientwebdesign.com/ is an offline web book (progressive web app). What was the primary reason to make it available like this (besides the other formats)?

Well, given the subject matter, it felt right that the canonical version of the book should be not just online, but made with the building blocks of the web. The other formats are all nice to have, but the HTML version feels (to me) like the “real” book.

Interestingly, it wasn’t too much trouble for people to generate other formats from the HTML (ePub, MOBI, PDF), whereas I think trying to go in the other direction would be trickier.

As for the offline part, that felt like a natural fit. I had already done that with a previous book of mine, HTML5 For Web Designers, which I put online a year or two after its print publication. In that case, I used AppCache for the offline functionality. AppCache is horrible, but this use case might be one of the few where it works well: a static book that’s never going to change. Cache invalidation is one of the worst parts of using AppCache so by not having any kinds of updates at all, I dodged that bullet.

But when it came time for Resilient Web Design, a service worker was definitely the right technology. Still, I’ve got AppCache in there as well for the browsers that don’t yet support service workers.
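
The registration side of that belt-and-braces approach is tiny. Here’s a rough sketch (the filename is just a placeholder, not necessarily what the site actually uses):

// Browsers that understand this register the service worker;
// everything else falls back to the AppCache manifest in the HTML
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}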

2. What effect do you think Progressive Web Apps will have on content consumption, and do you think these will take over the purpose of some Native Apps?

The biggest effect that service workers could have is to change the expectations that people have about using the web, especially on mobile devices. Right now, people associate the web on mobile with long waits and horrible spammy overlays. Service workers can help solve that first part.

If people then start adding sites to their home screen, that will be a great sign that the web is really holding its own. But I don’t think we should get too optimistic about that: for a user, there’s no difference between a prompt on their screen saying “add to home screen” and a prompt on their screen saying “download our app”—they’re equally likely to be dismissed because we’ve trained people to dismiss anything that covers up the content they actually came for.

It’s entirely possible that websites could start taking over much of the functionality that previously was only possible in a native app. But I think that inertia and habit will keep people using native apps for quite some time.

The big exception is in markets where storage space on devices is in short supply. That’s where the decision to install a native app isn’t taken lightly (given the choice between your family photos and an app, most people will reject the app). The web can truly shine here if we build lightweight, performant services.

Even in that situation, I’m still not sure how many people will end up adding those sites to their home screen (it might feel so similar to installing a native app that there may be some residual worry about storage space) but I don’t think that’s too much of a problem: if people get to a site via search or typing, that’s fine.

I worry that the messaging around “progressive web apps” is perhaps over-fetishising the home screen. I don’t think that’s the real battleground. The real battleground is in people’s heads; how they perceive the web and how they perceive native.

After all, if the average number of native apps installed in a month is zero, then that’s not exactly a hard target to match. :-)

3. What is your vision regarding Progressive Web Apps?

For me, progressive web apps don’t feel like a separate thing from making websites. I worry that the marketing of them might inflate expectations or confuse people. I like the idea that they’re simply websites that have taken their vitamins.

So my vision for progressive web apps is the same as my vision for the web: something that people use every day for all sorts of tasks.

I find it really discouraging that progressive web apps are becoming conflated with single page apps and the app shell model. Those architectural decisions have nothing to do with service workers, HTTPS, and manifest files. Yet I keep seeing the concepts used interchangeably. It would be a real shame if people chose not to use these great technologies just because they don’t classify what they’re building as an “app.”

If anything, it’s good ol’ fashioned content sites (newspapers, Wikipedia, blogs, and yes, books) that can really benefit from the turbo boost of service worker+HTTPS+manifest.

I was at a conference recently where someone was giving a talk encouraging people to build progressive web apps but discouraging people from doing it for their own personal sites. That’s a horrible, elitist attitude. I worry that this attitude is being codified in the term “progressive web app”.

4. What is the biggest learning you’ve had since working on Progressive Web Apps?

Well, like I said, I think that some people are focusing a bit too much on the home screen and not enough on the benefits that service workers can provide to just about any website.

My biggest learning is that these technologies aren’t for a specific subset of services, but can benefit just about anything that’s on the web. I mean, just using a service worker to explicitly cache static assets like CSS, JS, and some images is a no-brainer for almost any project.
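
To give a sense of how little code that takes, here’s a minimal sketch of the install step (the cache name and file paths are placeholders rather than anyone’s actual setup):

const staticCacheName = 'static-v1';

addEventListener('install', event => {
  event.waitUntil(
    caches.open(staticCacheName)
    .then( cache => {
      // Explicitly cache the static assets up front
      return cache.addAll([
        '/css/styles.css',
        '/js/scripts.js',
        '/images/logo.svg'
      ]);
    })
  );
});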

So there you go—I’m very excited about the capabilities of these technologies, but very worried about how they’re being “sold”. I’m particularly nervous that in the rush to emulate native apps, we end up losing the very thing that makes the web so powerful: URLs.

Monday, March 13th, 2017

In AMP we trust

AMP Conf was one of those deep dive events, with two days dedicated to one single technology: AMP.

Except AMP isn’t really one technology, is it? And therein lies the confusion. This was at the heart of the panel I was on. When we talk about AMP, we could be talking about one of three things:

  1. The AMP format. A bunch of web components. For instance, instead of using an img element on an AMP page, you use an amp-img element instead.
  2. The AMP rules. There’s one JavaScript file, hosted on Google’s servers, that turns those web components from spans into working elements. No other JavaScript is allowed. All your styles must be in a style element instead of an external file, and there’s a limit on what you can do with those styles.
  3. The AMP cache. The source of most confusion—and even downright enmity—this is what’s behind the fact that when you launch an AMP result from Google search, you don’t go to another website. You see Google’s cached copy of the page instead of the original.

The first piece of AMP—the format—is kind of like a collection of marginal gains. Where the img element might have some performance issues, the amp-img element optimises for perceived performance. But if you just used the AMP web components, it wouldn’t be enough to make your site blazingly fast.

The second part of AMP—the rules—is where the speed gains start to really show. You can’t have an external style sheet, and crucially, you can’t have any third-party scripts other than the AMP script itself. This is key to making AMP pages super fast. It’s not so much about what AMP does; it’s more about what it doesn’t allow. If you never used a single AMP component, but stuck to AMP’s rules disallowing external styles and scripts, you could easily make a page that’s even faster than what AMP can do.

At AMP Conf, Natalia pointed out that The Guardian’s non-AMP pages beat out the AMP pages for performance. So why even have AMP pages? Well, that’s down to the third, most contentious, part of the AMP puzzle.

The AMP cache turns the user experience of visiting an AMP page from fast to instant. While you’re still on the search results page, Google will pre-render an AMP page in the background. Not pre-fetch, pre-render. That’s why it opens so damn fast. It’s also what causes the most confusion for end users.

From my unscientific polling, the behaviour of AMP results confuses the hell out of people. The fact that the page opens instantly isn’t the problem—far from it. It’s the fact that you don’t actually go to another page. Technically, you’re still on Google. An analogous mental model would be an RSS reader, or an email client: you don’t go to an item or an email; you view it in situ.

Well, that mental model would be fine if it were consistent. But in Google search, only some results will behave that way (the AMP pages) and others will behave just like regular links to other websites. No wonder people are confused! Some search results take them away and some search results keep them on Google …even though the page looks like a different website.

The price that we pay for the instantly-opening AMP pages from the Google cache is the URL. Because we’re looking at Google’s pre-rendered copy instead of the original URL, the address bar is not pointing to the site the browser claims to be showing. Everything in the body of the browser looks like an article from The Guardian, but if I look at the URL (which is what security people have been telling us for years is important to avoid being phished), then I’ll see a domain that is not The Guardian’s.

But wait! Couldn’t Google pre-render the page at its original URL?

Yes, they could. But they won’t.

This was a point that Paul kept coming back to: trust. There’s no way that Google can trust that someone else’s URL will play by the AMP rules (no external scripts, only loading embedded content via web components, limited styles, etc.). They can only trust the copies that they themselves are serving up from their cache.

By the way, there was a joint AMP/search panel at AMP Conf with representatives from both teams. As you can imagine, there were many questions for the search team, most of which were Glomar’d. But one thing that the search people said time and again was that Google was not hosting our AMP pages. Now I don’t know if they were trying to make some fine-grained semantic distinction there, but that’s an outright falsehood. If I click on a link, and the URL I get taken to is a Google property, then I am looking at a page hosted by Google. Yes, it might be a copy of a document that started life somewhere else, but if Google are serving something from their cache, they are hosting it.

This is one of the reasons why AMP feels like such a bait’n’switch to me. When it first came along, it felt like a direct competitor to Facebook’s Instant Articles and Apple News. But the big difference, we were told, was that you get to host your own content. That appealed to me much more than having Facebook or Apple host the articles. But now it turns out that Google do host the articles.

This will be the point at which Googlers will say no, no, no, you can totally host your own AMP pages …but you won’t get the benefits of pre-rendering. But without the pre-rendering, what’s the point of even having AMP pages?

Well, there is one non-cache reason to use AMP and it’s a political reason. Beleaguered developers working for publishers of big bloated web pages have a hard time arguing with their boss when they’re told to add another crappy JavaScript tracking script or bloated library to their pages. But when they’re making AMP pages, they can easily refuse, pointing out that the AMP rules don’t allow it. Google plays the bad cop for us, and it’s a very valuable role. Sarah pointed this out on the panel we were on, and she was spot on.

Alright, but what about The Guardian? They’ve already got fast pages, but they still have to create separate AMP pages if they want to get the pre-rendering benefits when they show up in Google search results. Sorry, says Google, but it’s the only way we can trust that the pre-rendered page will be truly fast.

So here’s the impasse we’re at. Google have provided a list of best practices for making fast web pages, but the only way they can truly verify that a page is sticking to those best practices is by hosting their own copy, URLs be damned.

This was the crux of Paul’s argument when he was on the Shop Talk Show podcast (it’s a really good episode—I was genuinely reassured to hear that Paul is not gung-ho about drinking the AMP Kool Aid; he has genuine concerns about the potential downsides for the web).

Initially, I accepted this argument that Google just can’t trust the rest of the web. But the more I talked to people at AMP Conf—and I had some really, really good discussions with people away from the stage—the more I began to question it.

Here’s the thing: the regular Google search can’t guarantee that any web page is actually 100% the right result to return for a search. Instead there’s a lot of fuzziness involved: based on the content, the markup, and the number of trusted sources linking to this, it looks like it should be a good result. In other words, Google search trusts websites to—by and large—do the right thing. Sometimes websites abuse that trust and try to game the system with sneaky tricks. Google responds with penalties when that happens.

Why can’t it be the same for AMP pages? Let me host my own AMP pages (maybe even host my own AMP script) and then when the Googlebot crawls those pages—the same as it crawls any other pages—that’s when it can verify that the AMP page is abiding by the rules. If I do something sneaky and trick Google into flagging a page as fast when it actually isn’t, then take my pre-rendering reward away from me.

To be fair, Google has very, very strict rules about what and how to pre-render the AMP results it’s caching. I can see how allowing even the potential for a false positive would have a negative impact on the user experience of Google search. But c’mon, there are already false positives in regular search results—fake news, spam blogs. Googlers are smart people. They can solve—or at least mitigate—these problems.

Google says it can’t trust our self-hosted AMP pages enough to pre-render them. But they ask for a lot of trust from us. We’re supposed to trust Google to cache and host copies of our pages. We’re supposed to trust Google to provide some mechanism to users to get at the original canonical URL. I’d like to see trust work both ways.

Monday, March 6th, 2017

Empire State

I’m in New York. Again. This time it’s for Google’s AMP Conf, where I’ll be giving ‘em a piece of my mind on a panel.

The conference starts tomorrow so I’ve had a day or two to acclimatise and explore. Seeing as Google are footing the bill for travel and accommodation, I’m staying at a rather nice hotel close to the conference venue in Tribeca. There’s live jazz in the lounge most evenings, a cinema downstairs, and should I request it, I can even have a goldfish in my room.

Today I realised that my hotel sits in the apex of a triangle of interesting buildings: carrier hotels.

32 Avenue of the Americas: “Telephone wires and radio unite to make neighbors of nations”

Looming above my hotel is 32 Avenue of the Americas. On the outside the building looks like your classic Gozer the Gozerian style of New York building. Inside, the lobby features a mosaic on the ceiling, and another on the wall extolling the connective power of radio and telephone.

The same architects also designed 60 Hudson Street, which has a similar Art Deco feel to it. Inside, there’s a cavernous hallway running through the ground floor but I can’t show you a picture of it. A security guard told me I couldn’t take any photos inside …which is a little strange seeing as it’s splashed across the website of the building.

60 Hudson: “HEADQUARTERS The Western Union Telegraph Co. and telegraph capitol of the world 1930-1973”

I walked around the outside of 60 Hudson, taking more pictures. Another security guard asked me what I was doing. I told her I was interested in the history of the building, which is true; it was the headquarters of Western Union. For much of the twentieth century, it was a world hub of telegraphic communication, in much the same way that a beach hut in Porthcurno was the nexus of the nineteenth century.

For a 21st century hub, there’s the third and final corner of the triangle at 33 Thomas Street. It’s a breathtaking building. It looks like a spaceship from a Chris Foss painting. It was probably designed more like a spacecraft than a traditional building—its primary purpose was to withstand an atomic blast. Gone are niceties like windows. Instead there’s an impenetrable monolith that looks like something straight out of a dystopian sci-fi film.

33 Thomas Street, New York

Brutalist on the outside, its interior is host to even more brutal acts of invasive surveillance. The Snowden papers revealed this AT&T building to be a centrepiece of the Titanpointe programme:

They called it Project X. It was an unusually audacious, highly sensitive assignment: to build a massive skyscraper, capable of withstanding an atomic blast, in the middle of New York City. It would have no windows, 29 floors with three basement levels, and enough food to last 1,500 people two weeks in the event of a catastrophe.

But the building’s primary purpose would not be to protect humans from toxic radiation amid nuclear war. Rather, the fortified skyscraper would safeguard powerful computers, cables, and switchboards. It would house one of the most important telecommunications hubs in the United States…

Looking at the building, it requires very little imagination to picture it as the lair of villainous activity. Laura Poitras’s short film Project X basically consists of a voiceover of someone reading an NSA manual, some ominous background music, and shots of 33 Thomas Street looming in its oh-so-loomy way.

A top-secret handbook takes viewers on an undercover journey to Titanpointe, the site of a hidden partnership. Narrated by Rami Malek and Michelle Williams, and based on classified NSA documents, Project X reveals the inner workings of a windowless skyscraper in downtown Manhattan.

Friday, March 3rd, 2017

Small steps

The new Clearleft website is live! Huzzah!

Many people have been working very hard on it and it’s all looking rather nice. But, as I said before, the site launch isn’t the end—it’s just the beginning.

There are some obvious next steps: fixing bugs, adding content, tweaking copy, and, oh yeah, that whole “testing with real users” thing. But there’s also an opportunity to have some fun on the front end. Now that the site is out there in the wild, there’s a real incentive to improve its performance.

Off the top of my head, these are some areas where I think we can play around:

  • Font loading. Right now the site is just using @font-face. A smart font-loading strategy—at least for the body copy—could really help improve the perceived performance (there’s a rough sketch of one approach just after this list).
  • Responsive images. A long-term solution will require some wrangling on the back end, but I reckon we can come up with some way of generating different sized images to reference in srcset.
  • Service worker. It’s a no-brainer. Now that the Clearleft site is (finally!) running on HTTPS, having a simple service worker to cache static assets like CSS, JavaScript and some images seems like the obvious next step. The question is: what other offline shenanigans could we get up to?
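
That font-loading sketch could be as simple as using the CSS Font Loading API to flip a class once the body copy font is ready. The font name and class name here are made up, and browsers without the API simply stick with the fallback font:

if ('fonts' in document) {
  // Load the body copy font, then let the CSS switch over to it
  document.fonts.load('1em BodyFont')
  .then( () => {
    document.documentElement.classList.add('fonts-loaded');
  });
}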

I’m looking forward to tinkering with some of those technologies. Each one should make an incremental improvement to the site’s performance. There are already some steps on the back-end that are making a big difference: upgrading to PHP7 and using HTTP2.

Now the real fun begins.

Wednesday, February 22nd, 2017

Long betting

It has been exactly six years to the day since I instantiated this prediction:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

It is exactly five years to the day until the prediction condition resolves to a Boolean true or false.

If it resolves to true, The Bletchley Park Trust will receive $1000.

If it resolves to false, The Internet Archive will receive $1000.

Much as I would like Bletchley Park to get the cash, I’m hoping to lose this bet. I don’t want my pessimism about URL longevity to be rewarded.

So, to recap, the bet was placed on

02011-02-22

It is currently

02017-02-22

And the bet times out on

02022-02-22.

Tuesday, February 21st, 2017

Variable fonts

We have a tradition here at Clearleft of having the occasional lunchtime braindump. They’re somewhat sporadic, but it’s always a good day when there’s a “brown bag” gathering.

When Google’s AMP format came out and I had done some investigating, I led a brown bag playback on that. Recently Mark did one on Fractal so that everyone knew how work on that was progressing.

Today Richard gave us a quick brown bag talk on variable web fonts. He talked us through how these will work on the web and in operating systems. We got a good explanation of how these fonts would get designed—the type designer designs the “extreme” edges of size, weight, or whatever, and then the file format itself can interpolate all the in-between stages. So, in theory, one single font file can hold hundreds, thousands, or hundreds of thousands of potential variations. It feels like switching from bitmap images to SVG—there’s suddenly much greater flexibility.

A variable font is a single font file that behaves like multiple fonts.

There were a couple of interesting tidbits that Rich pointed out…

While this is a new file format, there isn’t going to be a new file extension. These will be .ttf files, and so by extension, they can be .woff and .woff2 files too.

This isn’t some proposed theoretical standard: an unprecedented amount of co-operation has gone into the creation of this format. Adobe, Apple, Google, and Microsoft have all contributed. Agreement is the hardest part of any standards process. Once that’s taken care of, the technical solution follows quickly. So you can expect this to land very quickly and widely.

This technology is landing in web browsers before it lands in operating systems. It’s already available in the Safari Technology Preview. That means that for a while, the very best on-screen typography will be delivered not in eBook readers, but in web browsers. So if you want to deliver the absolute best reading experience, look to the web.

And here’s the part that I found fascinating…

We can currently use numbers for the font-weight property in CSS. Those number values increment in hundreds: 100, 200, 300, etc. Now with variable fonts, we can start using the values in between: 321, 417, 183, etc. How fortuitous that we have 99 free slots between our current set of values!

Well, that’s no accident. The reason why the numbers were originally specced in increments of 100 back in 1996 was precisely so that some future sci-fi technology could make use of the ranges in between. That’s some future-friendly thinking! And as Håkon wrote:

One of the reasons we chose to use three-digit numbers was to support intermediate values in the future. And the future is now :)
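
In practical terms, that means CSS like this hypothetical snippet (the weight value is arbitrary):

h1 {
  /* any value in the range, not just a multiple of 100 */
  font-weight: 625;
}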

Needless to say, variable fonts will be covered in Richard’s forthcoming book.

Monday, February 20th, 2017

Amber

I really enjoyed teaching in Porto last week. It was like having a week-long series of CodeBar sessions.

Whenever I’m teaching at CodeBar, I like to be paired up with people who are just starting out. There’s something about explaining the web and HTML from first principles that I really like. And people often have lots and lots of questions that I enjoy answering (if I can). At CodeBar—and at The New Digital School—I found myself saying “Great question!” multiple times. The really great questions are the ones that I respond to with “I don’t know …let’s find out!”

CodeBar is always a very rewarding experience for me. It has given me the opportunity to try teaching. And having tried it, I can now safely say that I like it. It’s also a great chance to meet people from all walks of life. It gets me out of my bubble.

I can’t remember when I was first paired up with Amber at CodeBar. It must have been sometime last year. I do remember that she had lots of great questions—at some point I found myself explaining how hexadecimal colours work.

I was impressed with Amber’s eagerness to learn. I also liked that she was making her own website. I told her about Homebrew Website Club and she started coming along to that (along with other CodeBar people like Cassie and Alice).

I’ve mentioned to multiple CodeBar students that there’s pretty much an open-door policy at Clearleft when it comes to shadowing: feel free to come along and sit with a front-end developer while they’re working on client projects. A few people have taken up the offer and enjoyed observing myself or Charlotte at work. Amber was one of those people. Again, I was very impressed with her drive. She’s got a full-time job (with sometimes-crazy hours) but she’s so determined to get into the world of web design and development that she’s willing to spend her free time visiting Clearleft to soak up the atmosphere of a design studio.

We’ve decided to turn this into something more structured. Amber and I will get together for a couple of hours once a week. She’s given me a list of some of the areas she wants to explore, and I think it’s a fine-looking list:

  • I want to gather base, structural knowledge about the web and all related aspects. Things seem to float around in a big cloud at the moment.
  • I want to adhere to best practices.
  • I want to learn more about what direction I want to go in, find a niche.
  • I’d love the opportunity to chat with the brilliant people who work at Clearleft and gain a broad range of knowledge from them.

My plan right now is to take a two-track approach: one track about the theory, and another track about the practicalities. The practicalities will be HTML, CSS, JavaScript, and related technologies. The theory will be about understanding the history of the web and its strengths and weaknesses as a medium. And I want to make sure there’s plenty of UX, research, information architecture and content strategy covered too.

Seeing as we’ll only have a couple of hours every week, this won’t be quite like the masterclass I just finished up in Porto. Instead I imagine I’ll be laying some groundwork and then pointing to topics to research. I guess it’s a kind of homework. For example, after we talked today, I set Amber this little bit of research for the next time we meet: “What is the difference between the internet and the World Wide Web?”

I’m excited to see where this will lead. I find Amber’s drive and enthusiasm very inspiring. I also feel a certain weight of responsibility—I don’t want to enter into this lightly.

I’m not really sure what to call this though. Is it mentorship? Or is it coaching? Or training? All of the above?

Whatever it is, I’m looking forward to documenting the journey. Amber will be writing about it too. She is already demonstrating a way with words.

Friday, February 17th, 2017

Teaching in Porto, day five

For the final day of the week-long masterclass, I had no agenda. This was a time for the students to work on their own projects, but I was there to answer any remaining questions they might have.

As I suspected, the people with the most interest and experience in development were the ones with plenty of questions. I was more than happy to answer them. With no specific schedule for the day, we were free to merrily go chasing down rabbit holes.

SVG? Sure, I’d be happy to talk about that. More JavaScript? My pleasure! Databases? Not really my area of expertise, but I’m more than willing to share what I know.

It was a fun day. The centrepiece was a most excellent lunch across the river at a really traditional seafood place.

At the very end of the day, after everyone else had gone, I sat down with Tiago to discuss how the week went. Overall, I was happy. I was nervous going into this masterclass—I had never done a whole week of teaching—but based on the feedback I got, I think I did okay. There were times when I got impatient, and I wish I could turn back the clock and erase those moments. I noticed that those moments tended to occur when it was time for hands-on-keyboards coding: “no, not like that—like this!” I need to get better at handling those situations. But when we were working on paper, or having stand-up discussions, or when I was just geeking out on a particular topic, everything felt quite positive.

All in all, this week has been a great experience. I know it sounds like a cliché, but I felt it was a real honour and a privilege to be involved with the New Digital School. I’ve enjoyed doing hands-on teaching, and I’d like to do more of it.

Teaching in Porto, day four

Day one covered HTML (amongst other things), day two covered CSS, and day three covered JavaScript. Each one of those days involved a certain amount of hands-on coding, with the students getting their hands dirty with angle brackets, curly braces, and semi-colons.

Day four was a deliberate step away from all that. No more laptops, just paper. Whereas the previous days had focused on collaboratively working on a single document, today I wanted everyone to work on a separate site.

The sites were generated randomly. I made five cards with types of sites on them: news, social network, shopping, travel, and learning. Another five cards had subjects: books, music, food, pets, and cars. And another five cards had audiences: students, parents, the elderly, commuters, and teachers. Everyone was dealt a random card from each deck, resulting in briefs like “a travel site about food for the elderly” or “a social network about music for commuters.”

For a bit of fun, the first brainstorming exercise (run as a 6-up) was to come up with potential names for this service—4 minutes for 6 ideas. Then we went around the table, shared the ideas, got feedback, and settled on the names.

Now I asked everyone to come up with a one-sentence mission statement for their newly-named service. This was a good way of teasing out the most important verbs and nouns, which led nicely into the next task: answering the question “what is the core functionality?”

If that sounds familiar, it’s because it’s the first part of the three-step process I outlined in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

We did some URL design, figuring out what structures would make sense for straightforward GET requests, like:

  • /things
  • /things/ID

Then, once it was clear what the primary “thing” was (a car, a book, etc.), I asked them to write down all the pieces that might appear on such a page; one post-it note per item e.g. “title”, “description”, “img”, “rating”, etc.

The next step involved prioritisation. They took those post-it notes and put them on the wall, but they had to put them in a vertical line from top to bottom in decreasing order of importance. This can be a challenge, but it’s better to solve these problems now rather than later.

Okay. Now I asked them to “mark up” those vertical lists of post-it notes: writing HTML tag names by each one. By doing this before doing any visual design, it meant they were thinking about the meaning of the content first.

After that, we did a good ol’ fashioned classic 6-up sketching exercise, followed by critique (including a “designated dissenter” for each round). At this point, I was encouraging them to go crazy with ideas—they already had the core functionality figured out (with plain ol’ client/server requests and responses) so they could add all the bells and whistles they wanted on top of that.

We finished up with a discussion of some of those bells and whistles, and how they could be used to improve the user experience: Ajax, geolocation, service workers, notifications, background sync …the sky’s the limit.

It was a whirlwind tour for just one day but I think it helped emphasise the importance of thinking about the fundamentals before adding enhancements.

This marked the end of the structured masterclass lessons. Tomorrow I’m around to answer any miscellaneous questions (if I can) and chat to the students individually while they work on their term projects.

Thursday, February 16th, 2017

Teaching in Porto, day three

Day two ended with a bit of a cliffhanger as I had the students mark up a document, but not yet style it. In the morning of day three, the styling began.

Rather than just treat “styling” as one big monolithic task, I broke it down into typography, colour, negative space, and so on. We time-boxed each one of those parts of the visual design. So everyone got, say, fifteen minutes to write styles relating to font families and sizes, then another fifteen minutes to write styles for colours and background colours. Bit by bit, the styles were layered on.

When it came to layout, we closed the laptops and returned to paper. Everyone did a quick round of 6-up sketching so that there was plenty of fast iteration on layout ideas. That was followed by some critique and dot-voting of the sketches.

Rather than diving into the CSS for layout—which can get quite complex—I instead walked through the approach for layout; namely putting all your layout styles inside media queries. To explain media queries, I first explained media types and then introduced the query part.
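
To give a flavour of what I walked through, a layout style wrapped in a media query looks something like this (the breakpoint value here is entirely arbitrary):

/* "screen" is the media type; the part in brackets is the query */
@media screen and (min-width: 30em) {
  .content {
    width: 70%;
  }
}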

I felt pretty confident that I could skip over the nitty-gritty of media queries and cross-device layout because the next masterclass that will be taught at the New Digital School will be a week of responsive design, taught by Vitaly. I just gave them a taster—Vitaly can dive deeper.

By lunch time, I felt that we had covered CSS pretty well. After lunch it was time for the really challenging part: JavaScript.

The reason why I think JavaScript is challenging is that it’s inherently more complex than HTML or CSS. Those are declarative languages with fairly basic concepts at heart (elements, attributes, selectors, etc.), whereas an imperative language like JavaScript means entering the territory of logic, loops, variables, arrays, objects, and so on. I really didn’t want to get stuck in the weeds with that stuff.

I focused on the combination of JavaScript and the Document Object Model as a way of manipulating the HTML and CSS that’s already inside a browser. A lot of that boils down to this pattern:

When (some event happens), then (take this action).

We brainstormed some examples of this e.g. “When the user submits a form, then show a modal dialogue with an acknowledgement.” I then encouraged them to write a script …but I don’t mean a script in the JavaScript sense; I mean a script in the screenwriting or theatre sense. Line by line, write out each step that you want to accomplish. Once you’ve done that, translate each line of your English (or Portuguese) script into JavaScript.
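
A hypothetical translation of that form example, simplified to show a message rather than a full modal dialogue, might go something like this (the selectors are invented purely for illustration):

// When the user submits a form…
document.querySelector('form')
.addEventListener('submit', event => {
  // …then show an acknowledgement instead of reloading the page
  event.preventDefault();
  document.querySelector('.acknowledgement').removeAttribute('hidden');
});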

I did a quick demo as a proof of concept (which, much to my surprise, actually worked first time), but I was at pains to point out that they didn’t need to remember the syntax or vocabulary of the script; it was much more important to have a clear understanding of the thinking behind it.

With the remaining time left in the day, we ran through the many browser APIs available to JavaScript, from the relatively simple—like querySelector and Ajax—right up to the latest device APIs. I think I got the message across that, using JavaScript, there’s practically no limit to what you can do on the web these days …but the trick is to use that power responsibly.

At this point, we’ve had three days and we’ve covered three layers of web technologies: HTML, CSS, and JavaScript. Tomorrow we’ll take a step back from the nitty-gritty of the code. It’s going to be all about how to plan and think about building for the web before a single line of code gets written.

Wednesday, February 15th, 2017

Teaching in Porto, day two

The second day in this week-long masterclass was focused on CSS. But before we could get stuck into that, there were some diversions and tangents brought on by left-over questions from day one.

This was not a problem. Far from it! The questions were really good. Like, how does a web server know that someone has permission to carry out actions via a POST request? What a perfect opportunity to talk about state! Cue a little history lesson on the web’s beginning as a deliberately stateless medium, followed by the introduction of cookies …for good and ill.

We also had a digression about performance, file sizes, and loading times—something I’m always more than happy to discuss. But by mid-morning, we were back on track and ready to tackle CSS.

As with the first day, I wanted to take a “long zoom” look at design and the web. So instead of diving straight into stylesheets, we first looked at the history of visual design: cave paintings, hieroglyphs, illuminated manuscripts, the printing press, the Swiss school …all of them examples of media where the designer knows where the “edges” of the canvas lie. Not so with the web.

So to tackle visual design on the web, I suggested separating layout from all the other aspects of visual design: colour, typography, contrast, negative space, and so on.

At this point we were ready to start thinking in CSS. I started by pointing out that all CSS boils down to one pattern:

selector {
  property: value;
}

The trick, then, is to convert what you want into that pattern. So “I want the body of the page to be off-white with dark grey text” in English is translated into the CSS:

body {
  background-color: rgb(225,225,255);
  color: rgb(51,51,51);
}

…and so on for type, contrast, hierarchy, and more.

We started applying styles to the content we had collectively marked up with post-it notes on day one. Then the students split into groups of two to create an HTML document each. Tomorrow they’ll be styling that document.

There were two important links that came up over the course of day two:

  1. A Dao Of Web Design by John Allsopp, and
  2. The CSS Zen Garden.

If all goes according to plan, we’ll be tackling the third layer of the web technology stack tomorrow: JavaScript.

Monday, February 13th, 2017

Teaching in Porto, day one

Today was the first day of the week-long “masterclass” I’m leading here at The New Digital School in Porto.

When I was putting together my stab-in-the-dark attempt to provide an outline for the week, I labelled day one as “How the web works” and gave this synopsis:

The internet and the web; how browsers work; a history of visual design on the web; the evolution of HTML and CSS.

There ended up being less about the history of visual design and CSS (we’ll cover that tomorrow) and more about the infrastructure that the web sits upon. Before diving into the way the web works, I thought it would be good to talk about how the internet works, which led me back to the history of communication networks in general. So the day started from cave drawings and smoke signals, leading to trade networks, then the postal system, before getting to the telegraph, and then telephone networks, the ARPANET, and eventually the internet. By lunch time we had just arrived at the birth of the World Wide Web at CERN.

It wasn’t all talk though. To demonstrate a hub-and-spoke network architecture I had everyone write down someone else’s name on a post-it note, then stand in a circle around me, and pass me (the hub) those messages to relay to their intended receiver. Later we repeated this exercise but with a packet-switching model: everyone could pass a note to their left or to their right. The hub-and-spoke system took almost a minute to relay all six messages; the packet-switching version took less than 10 seconds.

Over the course of the day, three different laws came up that were relevant to the history of the internet and the web:

Metcalfe’s Law
The value of a network is proportional to the square of the number of users.
Postel’s Law
Be conservative in what you send, be liberal in what you accept.
Sturgeon’s Law
Ninety percent of everything is crap.

There were also references to the giants of hypertext: Ted Nelson, Vannevar Bush, and Douglas Engelbart—for a while, I had the mother of all demos playing silently in the background.

After a most-excellent lunch in a nearby local restaurant (where I can highly recommend the tripe), we started on the building blocks of the web: HTTP, URLs, and HTML. I pulled up the first ever web page so that we could examine its markup and dive into the wonder of the A element. That led us to the first version of HTML which gave us enough vocabulary to start marking up documents: p, h1-h6, ol, ul, li, and a few others. We went around the room looking at posters and other documents pinned to the wall, and started marking them up by slapping on post-it notes with opening and closing tags on them.

At this point we had covered the anatomy of an HTML element (opening tags, closing tags, attribute names and attribute values) as well as some of the history of HTML’s expanding vocabulary, including elements added in HTML5 like section, article, and nav. But so far everything was to do with marking up static content in a document. Stepping back a bit, we returned to HTTP, and talked about the difference between GET and POST requests. That led into ways of sending data to a server, which led to form fields and the many types of inputs at our disposal: text, password, radio, checkbox, email, url, tel, datetime, color, range, and more.

With that, the day drew to a close. I feel pretty good about what we covered. There was a lot of groundwork, and plenty of history, but also plenty of practical information about how browsers interpret HTML.

With the structural building blocks of the web in place, tomorrow is going to focus more on the design side of things.

Sunday, February 12th, 2017

From New York to Porto

February is shaping up to be a busy travel month. I’ve just come back from spending a week in New York as part of a ten-strong Clearleft expedition to this year’s Interaction conference.

There were some really good talks at the event, but alas, the multi-track format made it difficult to see all of them. Continuous partial FOMO was the order of the day. Still, getting to see Christina Xu and Brenda Laurel made it all worthwhile.

To be honest, the conference was only part of the motivation for the trip. Spending a week in New York with a gaggle of Clearlefties was its own reward. We timed it pretty well, being there for the Superb Owl, and for a seasonal snowstorm. A winter trip to New York just wouldn’t be complete without a snowball fight in Central Park.

Funnily enough, I’m going to be back in New York in just three weeks’ time for AMP Conf at the start of March. I’ve been invited along to be the voice of dissent on a panel—a brave move by the AMP team. I wonder if they know what they’re letting themselves in for.

Before that though, I’m off to Porto for a week. I’ll be teaching at the New Digital School, running a masterclass in progressive enhancement:

In this masterclass we’ll dive into progressive enhancement, a layered approach to building for the web that ensures access for all. Content, structure, presentation, and behaviour are each added in a careful, well-thought out way that makes the end result more resilient to the inherent variability of the web.

I must admit I’ve got a serious case of imposter syndrome about this. A full week of teaching—I mean, who am I to teach anything? I’m hoping that my worries and nervousness will fall by the wayside once I start geeking out with the students about all things web. I’ve sorta kinda got an outline of what I want to cover during the week, but for the most part, I’m winging it.

I’ll try to document the week as it progresses. And you can certainly expect plenty of pictures of seafood and port wine.

Tuesday, February 7th, 2017

Audio book

I’ve recorded each chapter of Resilient Web Design as MP3 files that I’ve been releasing once a week. The final chapter is now recorded and released, so my audio work here is done.

If you want to subscribe to the podcast, pop this RSS feed into your podcast software of choice. Or, if you prefer, you can have it as one single MP3 file to listen to as an audio book. It’s two hours long.

So, for those keeping count, the book is now available as HTML, PDF, EPUB, MOBI, and MP3.

Monday, January 23rd, 2017

Charlotte

Over the eleven-year (and counting) lifespan of Clearleft, people have come and gone—great people like Nat, Andy, Paul and many more. It’s always a bittersweet feeling. On the one hand, I know I’ll miss having them around, but on the other hand, I totally get why they’d want to try their hand at something different.

It was Charlotte’s last day at Clearleft last Friday. Her husband Tom is being relocated to work in Sydney, which is quite the exciting opportunity for both of them. Charlotte’s already set up with a job at Atlassian—they’re very lucky to have her.

So once again there’s the excitement of seeing someone set out on a new adventure. But this one feels particularly bittersweet to me. Charlotte wasn’t just a co-worker. For a while there, I was her teacher …or coach …or mentor …I’m not really sure what to call it. I wrote about the first year of learning and how it wasn’t just a learning experience for Charlotte, it was very much a learning experience for me.

For the last year though, there’s been less and less of that direct transfer of skills and knowledge. Charlotte is definitely not a “junior” developer any more (whatever that means), which is really good but it’s left a bit of a gap for me when it comes to finding fulfilment.

Just last week I was checking in with Charlotte at the end of a long day she had spent tirelessly working on the new Clearleft site. Mostly I was making sure that she was going to go home and not stay late (something that had happened the week before which I wanted to nip in the bud—that’s not how we do things ‘round here). She was working on a particularly gnarly cross-browser issue and I ended up sitting with her, trying to help work through it. At the end, I remember thinking “I’ve missed this.”

It hasn’t been all about HTML, CSS, and JavaScript. Charlotte really pushed herself to become a public speaker. I did everything I could to support that—offering advice, giving feedback and encouragement—but in the end, it was all down to her.

I can’t describe the immense swell of pride I felt when Charlotte spoke on stage. Watching her deliver her talk at Dot York was one of my highlights of the year.

Thinking about it, this is probably the perfect time for Charlotte to leave the Clearleft nest. After all, I’m not sure there’s anything more I can teach her. But this feels like a particularly sad parting, maybe because she’s going all the way to Australia and not, y’know, starting a new job in London.

In our final one-to-one, my stiff upper lip may have had a slight wobble as I told Charlotte what I thought was her greatest strength. It wasn’t her work ethic (which is incredibly strong), and it wasn’t her CSS skills (‘though she is now an absolute wizard). No, her greatest strength, in my opinion, is her kindness.

I saw her kindness in how she behaved with her colleagues, her peers, and of course in all the fantastic work she’s done at Codebar Brighton.

I’m going to miss her.

Thursday, January 19th, 2017

Looking beyond launch

It’s all go, go, go at Clearleft while we’re working on a new version of our website …accompanied by a brand new identity. It’s an exciting time in the studio, tinged with the slight stress that comes with any kind of unveiling like this.

I think it’s good to remember that this is the web. I keep telling myself that we’re not unveiling something carved in stone. Even after the launch we can keep making the site better. In fact, if we wait until everything is perfect before we launch, we’ll probably never launch at all.

On the other hand, you only get one chance to make a first impression, right? So it’s got to be good …but it doesn’t have to be done. A website is never done.

I’ve got to get comfortable with that. There’s lots of things that I’d like to be done in time for launch, but realistically it’s fine if those things are completed in the subsequent days or weeks.

Adding a service worker and making a nice offline experience? I really want to do that …but it can wait.

What about other performance tweaks? Yes, we’ll try to have every asset—images, fonts—optimised …but maybe not from day one.

Making sure that each page has good metadata—Open Graph? Twitter Cards? Microformats? Maybe even AMP? Sure …but not just yet.

Having gorgeous animations? Again, I really want to have them but as Val rightly points out, animations are an enhancement—a really, really great enhancement.

If anything, putting the site live before doing all these things acts as an incentive to make sure they get done.

So when you see the new site, if you view source or run it through Web Page Test and spot areas for improvement, rest assured we’re on it.

Wednesday, January 11th, 2017

Making Resilient Web Design work offline

I’ve written before about taking an online book offline, documenting the process behind the web version of HTML5 For Web Designers. A book is quite a static thing so it’s safe to take a fairly aggressive offline-first approach. In fact, a static unchanging book is one of the few situations that AppCache works for. Of course a service worker is better, but until AppCache is removed from browsers (and until service worker is supported across the board), I’m using both. I wouldn’t recommend that for most sites though—for most sites, use a service worker to enhance it, and avoid AppCache like the plague.

For Resilient Web Design, I took a similar approach to HTML5 For Web Designers but I knew that there was a good chance that some of the content would be getting tweaked at least for a while. So while the approach is still cache-first, I decided to keep the cache fairly fresh.

Here’s my service worker. It starts with the usual stuff: when the service worker is installed, there’s a list of static assets to cache. In this case, that list is literally everything; all the HTML, CSS, JavaScript, and images for the whole site. Again, this is a pattern that works well for a book, but wouldn’t be right for other kinds of websites.

The real heavy lifting happens with the fetch event. This is where the logic sits for what the service worker should do every time there’s a request for a resource. I’ve documented the logic with comments:

// Look in the cache first, fall back to the network
  // CACHE
  // Did we find the file in the cache?
      // If so, fetch a fresh copy from the network in the background
      // NETWORK
          // Stash the fresh copy in the cache
  // NETWORK
  // If the file wasn't in the cache, make a network request
      // Stash a fresh copy in the cache in the background
  // OFFLINE
  // If the request is for an image, show an offline placeholder
  // If the request is for a page, show an offline message

So my order of preference is:

  1. Try the cache first,
  2. Try the network second,
  3. Fall back to a placeholder as a last resort (sketched below).
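
That third step isn’t spelled out in the code excerpts that follow, but a rough sketch of the fallback might look like this. The offline page, the placeholder image, and the Accept-header check are all assumptions on my part, and both fallback files would need to be among the static assets cached at install time:

// OFFLINE
// A sketch of the last-resort fallback; both file paths are hypothetical
fetch(request)
.then( responseFromFetch => {
    return responseFromFetch;
})
.catch( error => {
    // If the request is for an image, show an offline placeholder
    if ((request.headers.get('Accept') || '').includes('image')) {
        return caches.match('/images/offline-placeholder.svg');
    }
    // If the request is for a page, show an offline message
    return caches.match('/offline.html');
});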

Leaving aside that third part, regardless of whether the response is served straight from the cache or from the network, the cache gets a top-up. If the response is being served from the cache, there’s an additional network request made to get a fresh copy of the resource that was just served. This means that the user might be seeing a slightly stale version of a file, but they’ll get the fresher version next time round.

Again, I think this is acceptable for a book where the tweaks and changes should be fairly minor, but I definitely wouldn’t want to do it on a more dynamic site where freshness matters more.

Here’s what it usually looks like when a file is served up from the cache:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      return responseFromCache;
  }

I’ve introduced an extra step where the fresher version is fetched from the network. This is where the code can look a bit confusing: the network request is happening in the background after the cached file has already been returned, but the code appears before the return statement:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      // If so, fetch a fresh copy from the network in the background
      event.waitUntil(
          // NETWORK
          fetch(request)
          .then( responseFromFetch => {
              // Stash the fresh copy in the cache
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      return responseFromCache;
  }

It’s asynchronous, see? So even though all that network code appears before the return statement, it’s pretty much guaranteed to complete after the cache response has been returned. You can verify this by putting in some console.log statements:

caches.match(request)
.then( responseFromCache => {
  if (responseFromCache) {
      event.waitUntil(
          fetch(request)
          .then( responseFromFetch => {
              console.log('Got a response from the network.');
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      console.log('Got a response from the cache.');
      return responseFromCache;
  }

Those log statements will appear in this order:

Got a response from the cache.
Got a response from the network.

That’s the opposite of the order in which they appear in the code. Everything inside the event.waitUntil part is asynchronous.

Here’s the catch: this kind of asynchronous waitUntil hasn’t landed in all the browsers yet. In those browsers, the code I’ve written will fail.

But never fear! Jake has written a polyfill. All I need to do is include that at the start of my serviceworker.js file and I’m good to go:

// Import Jake's polyfill for async waitUntil
importScripts('/js/async-waituntil.js');

I’m also using it when a file isn’t found in the cache, and is returned from the network instead. Here’s what the usual network code looks like:

fetch(request)
  .then( responseFromFetch => {
    return responseFromFetch;
  })

I want to also store that response in the cache, but I want to do it asynchronously—I don’t care how long it takes to put the file in the cache as long as the user gets the response straight away.

Technically, I’m not putting the response in the cache; I’m putting a copy of the response in the cache (it’s a stream, so I need to clone it if I want to do more than one thing with it).

fetch(request)
  .then( responseFromFetch => {
    // Stash a fresh copy in the cache in the background
    let responseCopy = responseFromFetch.clone();
    event.waitUntil(
      caches.open(staticCacheName)
      .then( cache => {
          cache.put(request, responseCopy);
      })
    );
    return responseFromFetch;
  })

That all seems to be working well in browsers that support service workers. For legacy browsers, like Mobile Safari, there’s the much blunter caveman logic of an AppCache manifest.

Here’s the JavaScript that decides whether a browser gets the service worker or the AppCache:

if ('serviceWorker' in navigator) {
  // If service workers are supported
  navigator.serviceWorker.register('/serviceworker.js');
} else if ('applicationCache' in window) {
  // Otherwise inject an iframe to use appcache
  var iframe = document.createElement('iframe');
  iframe.setAttribute('src', '/appcache.html');
  iframe.setAttribute('style', 'width: 0; height: 0; border: 0');
  document.querySelector('footer').appendChild(iframe);
}

Either way, people are making full use of the offline nature of the book and that makes me very happy indeed.

Thursday, January 5th, 2017

Going rogue

As soon as tickets were available for the Brighton premiere of Rogue One, I grabbed some—two front-row seats for one minute past midnight on December 15th. No problem. That was the night after the Clearleft end-of-year party on December 14th.

Then I realised how dates work. One minute past midnight on December 15th is the same night as December 14th. I had double-booked myself.

It’s a nice dilemma to have; party or Star Wars? I decided to absolve myself of the decision by buying additional tickets for an evening showing on December 15th. That way, I wouldn’t feel like I had to run out of the Clearleft party before midnight, like some geek Cinderella.

In the end though, I did end up running out of the Clearleft party. I had danced and quaffed my fill, things were starting to get messy, and frankly, I was itching to immerse myself in the newest Star Wars film ever since Graham strapped a VR headset on me earlier in the day and let me fly a virtual X-wing.

Getting in the mood for Rogue One with the Star Wars Battlefront X-Wing VR mission—invigorating! (and only slightly queasy-making)

So, somewhat tired and slightly inebriated, I strapped in for the midnight screening of Rogue One: A Star Wars Story.

I thought it was okay. Some of the fan service scenes really stuck out, and not in a good way. On the whole, I just wasn’t that gripped by the story. Ah, well.

Still, the next evening, I had those extra tickets I had bought as psychological insurance. “Why not?” I thought, and popped along to see it again.

This time, I loved it. It wasn’t just me either. Jessica was equally indifferent the first time ‘round, and she also enjoyed it way more the second time.

I can’t recall having such a dramatic swing in my appraisal of a film from one viewing to the next. I’m not quite sure why it didn’t resonate the first time. Maybe I was just too tired. Maybe I was overthinking it too much, unable to let myself get caught up in the story because I was over-analysing it as a new Star Wars film. Anyway, I’m glad that I like it now.

Much has been made of its similarity to classic World War Two films, which I thought worked really well. But the aspect of the film that I found most thought-provoking was the story of Galen Erso. It’s the classic tale of an apparently good person reluctantly working in service to evil ends.

This reminded me of Mother Night, perhaps my favourite Kurt Vonnegut book (although, let’s face it, many of his books are interchangeable—you could put one down halfway through, and pick another one up, and just keep reading). Mother Night gives the backstory of Howard W. Campbell, who appears as a character in Slaughterhouse Five. In the introduction, Vonnegut states that it’s the one story of his with a moral:

We are what we pretend to be, so we must be careful about what we pretend to be.

If Galen Erso is pretending to work for the Empire, is there any difference to actually working for the Empire? In this case, there’s a get-out clause for this moral dilemma: by sabotaging the work (albeit very, very subtly) Galen’s soul appears to be absolved of sin. That’s the conclusion of the excellent post on the Sci-fi Policy blog, Rogue One: an ‘Engineering Ethics’ Story:

What Galen Erso does is not simply watch a system be built and then whistleblow; he actively shaped the design from its earliest stages considering its ultimate societal impacts. These early design decisions are proactive rather than reactive, which is part of the broader engineering ethics lesson of Rogue One.

I know I’m Godwinning myself with the WWII comparisons, but there are some obvious historical precedents for Erso’s dilemma. The New York Review of Books has an in-depth look at Werner Heisenberg and his “did he/didn’t he?” legacy with Germany’s stalled atom bomb project. One generous reading of his actions is that he kept the project going in order to keep scientists from being sent to the front, but made sure that the project was never ambitious enough to actually achieve destructive ends:

What the letters reveal are glimpses of Heisenberg’s inner life, like the depth of his relief after the meeting with Speer, reassured that things could safely tick along as they were; his deep unhappiness over his failure to explain to Bohr how the German scientists were trying to keep young physicists out of the army while still limiting uranium research work to a reactor, while not pursuing a fission bomb; his care in deciding who among friends and acquaintances could be trusted.

Speaking of Albert Speer, are his hands clean or dirty? And in either case, is it because of moral judgement or sheer ignorance? The New Atlantis dives deep into this question in Roger Forsgren’s article The Architecture of Evil:

Speer indeed asserted that his real crime was ambition — that he did what almost any other architect would have done in his place. He also admitted some responsibility, noting, for example, that he had opposed the use of forced labor only when it seemed tactically unsound, and that “it added to my culpability that I had raised no humane and ethical considerations in these cases.” His contrition helped to distance himself from the crude and unrepentant Nazis standing trial with him, and this along with his contrasting personal charm permitted him to be known as the “good Nazi” in the Western press. While many other Nazi officials were hanged for their crimes, the court favorably viewed Speer’s initiative to prevent Hitler’s scorched-earth policy and sentenced him to twenty years’ imprisonment.

I wish that these kinds of questions only applied to the past, but they are all too relevant today.

Software engineers in the United States are signing a pledge not to participate in the building of a Muslim registry:

We refuse to participate in the creation of databases of identifying information for the United States government to target individuals based on race, religion, or national origin.

That’s all well and good, but it might be that a dedicated registry won’t be necessary if those same engineers are happily contributing their talents to organisations whose business models are based on the ability to track and target people.

But now we’re into slippery slopes and glass houses. One person might draw the line at creating a Muslim registry. Someone else might draw the line at including any kind of invasive tracking script on a website. Someone else again might decide that the line is crossed by including Google Analytics. It’s moral relativism all the way down. But that doesn’t mean we shouldn’t draw lines. Of course it’s hard to live in an ideal state of ethical purity—from the clothes we wear to the food we eat to the electricity we use—but a muddy battleground is still capable of having a line drawn through it.

The question facing the fictional characters Galen Erso and Howard W. Campbell (and the historical figures of Werner Heisenberg and Albert Speer) is this: can I accomplish less evil by working within a morally repugnant system than being outside of it? I’m sure it’s the same question that talented designers ask themselves before taking a job at Facebook.

At one point in Rogue One, Galen Erso explicitly invokes the justification that they’d find someone else to do this work anyway. It sounds a lot like Tim Cook’s memo to Apple staff justifying his presence at a roundtable gathering that legitimised the election of a misogynist bigot to the highest office in the land. I’m sure that Tim Cook, Elon Musk, Jeff Bezos, and Sheryl Sandberg all think they are playing the part of Galen Erso but I wonder if they’ll soon find themselves indistinguishable from Orson Krennic.