Looking beyond launch

It’s all go, go, go at Clearleft while we’re working on a new version of our website …accompanied by a brand new identity. It’s an exciting time in the studio, tinged with the slight stress that comes with any kind of unveiling like this.

I think it’s good to remember that this is the web. I keep telling myself that we’re not unveiling something carved in stone. Even after the launch we can keep making the site better. In fact, if we wait until everything is perfect before we launch, we’ll probably never launch at all.

On the other hand, you only get one chance to make a first impression, right? So it’s got to be good …but it doesn’t have to be done. A website is never done.

I’ve got to get comfortable with that. There’s lots of things that I’d like to be done in time for launch, but realistically it’s fine if those things are completed in the subsequent days or weeks.

Adding a service worker and making a nice offline experience? I really want to do that …but it can wait.

What about other performance tweaks? Yes, we’ll try to have every asset—images, fonts—optimised …but maybe not from day one.

Making sure that each page has good metadata—Open Graph? Twitter Cards? Microformats? Maybe even AMP? Sure …but not just yet.

Having gorgeous animations? Again, I really want to have them but as Val rightly points out, animations are an enhancement—a really, really great enhancement.

If anything, putting the site live before doing all these things acts as an incentive to make sure they get done.

So when you see the new site, if you view source or run it through Web Page Test and spot areas for improvement, rest assured we’re on it.

Making Resilient Web Design work offline

I’ve written before about taking an online book offline, documenting the process behind the web version of HTML5 For Web Designers. A book is quite a static thing so it’s safe to take a fairly aggressive offline-first approach. In fact, a static unchanging book is one of the few situations that AppCache works for. Of course a service worker is better, but until AppCache is removed from browsers (and until service worker is supported across the board), I’m using both. I wouldn’t recommend that for most sites though—for most sites, use a service worker to enhance it, and avoid AppCache like the plague.

For Resilient Web Design, I took a similar approach to HTML5 For Web Designers but I knew that there was a good chance that some of the content would be getting tweaked at least for a while. So while the approach is still cache-first, I decided to keep the cache fairly fresh.

Here’s my service worker. It starts with the usual stuff: when the service worker is installed, there’s a list of static assets to cache. In this case, that list is literally everything; all the HTML, CSS, JavaScript, and images for the whole site. Again, this is a pattern that works well for a book, but wouldn’t be right for other kinds of websites.

The real heavy lifting happens with the fetch event. This is where the logic sits for what the service worker should do every time there’s a request for a resource. I’ve documented the logic with comments:

// Look in the cache first, fall back to the network
  // CACHE
  // Did we find the file in the cache?
      // If so, fetch a fresh copy from the network in the background
      // NETWORK
          // Stash the fresh copy in the cache
  // NETWORK
  // If the file wasn't in the cache, make a network request
      // Stash a fresh copy in the cache in the background
  // OFFLINE
  // If the request is for an image, show an offline placeholder
  // If the request is for a page, show an offline message

So my order of preference is:

  1. Try the cache first,
  2. Try the network second,
  3. Fall back to a placeholder as a last resort.

Leaving aside that third part, regardless of whether the response is served straight from the cache or from the network, the cache gets a top-up. If the response is being served from the cache, there’s an additional network request made to get a fresh copy of the resource that was just served. This means that the user might be seeing a slightly stale version of a file, but they’ll get the fresher version next time round.

Again, I think this is acceptable for a book where the tweaks and changes should be fairly minor, but I definitely wouldn’t want to do it on a more dynamic site where freshness matters more.

Here’s what it usually looks like when a file is served up from the cache:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      return responseFromCache;
  }

I’ve introduced an extra step where the fresher version is fetched from the network. This is where the code can look a bit confusing: the network request is happening in the background after the cached file has already been returned, but the code appears before the return statement:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      // If so, fetch a fresh copy from the network in the background
      event.waitUntil(
          // NETWORK
          fetch(request)
          .then( responseFromFetch => {
              // Stash the fresh copy in the cache
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      return responseFromCache;
  }

It’s asynchronous, see? So even though all that network code appears before the return statement, it’s pretty much guaranteed to complete after the cache response has been returned. You can verify this by putting in some console.log statements:

caches.match(request)
.then( responseFromCache => {
  if (responseFromCache) {
      event.waitUntil(
          fetch(request)
          .then( responseFromFetch => {
              console.log('Got a response from the network.');
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      console.log('Got a response from the cache.');
      return responseFromCache;
  }

Those log statements will appear in this order:

Got a response from the cache.
Got a response from the network.

That’s the opposite order in which they appear in the code. Everything inside the event.waitUntil part is asynchronous.

Here’s the catch: this kind of asynchronous waitUntil hasn’t landed in all the browsers yet. The code I’ve written will fail.

But never fear! Jake has written a polyfill. All I need to do is include that at the start of my serviceworker.js file and I’m good to go:

// Import Jake's polyfill for async waitUntil
importScripts('/js/async-waituntil.js');

I’m also using it when a file isn’t found in the cache and is returned from the network instead. Here’s what the usual network code looks like:

fetch(request)
  .then( responseFromFetch => {
    return responseFromFetch;
  })

I want to also store that response in the cache, but I want to do it asynchronously—I don’t care how long it takes to put the file in the cache as long as the user gets the response straight away.

Technically, I’m not putting the response in the cache; I’m putting a copy of the response in the cache (it’s a stream, so I need to clone it if I want to do more than one thing with it).

fetch(request)
  .then( responseFromFetch => {
    // Stash a fresh copy in the cache in the background
    let responseCopy = responseFromFetch.clone();
    event.waitUntil(
      caches.open(staticCacheName)
      .then( cache => {
          cache.put(request, responseCopy);
      })
    );
    return responseFromFetch;
  })
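
Putting all of those pieces together, including the offline fallbacks from the comments at the start, the whole fetch handler is shaped something like this. It’s a condensed sketch rather than the code verbatim: staticCacheName is defined elsewhere in the service worker, and the offline fallback file names here are placeholders.

self.addEventListener('fetch', event => {
  const request = event.request;
  event.respondWith(
    // CACHE: look in the cache first
    caches.match(request)
    .then( responseFromCache => {
      if (responseFromCache) {
        // Fetch a fresh copy from the network in the background
        event.waitUntil(
          fetch(request)
          .then( responseFromFetch => {
            // Stash the fresh copy in the cache
            caches.open(staticCacheName)
            .then( cache => {
              cache.put(request, responseFromFetch);
            });
          })
        );
        return responseFromCache;
      }
      // NETWORK: the file wasn't in the cache, so make a network request
      return fetch(request)
      .then( responseFromFetch => {
        // Stash a fresh copy in the cache in the background
        let responseCopy = responseFromFetch.clone();
        event.waitUntil(
          caches.open(staticCacheName)
          .then( cache => {
            cache.put(request, responseCopy);
          })
        );
        return responseFromFetch;
      })
      // OFFLINE: the network request failed
      .catch( () => {
        // If the request is for an image, show an offline placeholder
        if ((request.headers.get('Accept') || '').indexOf('image') !== -1) {
          return caches.match('/offline.svg');
        }
        // If the request is for a page, show an offline message
        return caches.match('/offline.html');
      });
    })
  );
});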

That all seems to be working well in browsers that support service workers. For legacy browsers, like Mobile Safari, there’s the much blunter caveman logic of an AppCache manifest.

Here’s the JavaScript that decides whether a browser gets the service worker or the AppCache:

if ('serviceWorker' in navigator) {
  // If service workers are supported
  navigator.serviceWorker.register('/serviceworker.js');
} else if ('applicationCache' in window) {
  // Otherwise inject an iframe to use appcache
  var iframe = document.createElement('iframe');
  iframe.setAttribute('src', '/appcache.html');
  iframe.setAttribute('style', 'width: 0; height: 0; border: 0');
  document.querySelector('footer').appendChild(iframe);
}

Either way, people are making full use of the offline nature of the book and that makes me very happy indeed.

Going rogue

As soon as tickets were available for the Brighton premiere of Rogue One, I grabbed some—two front-row seats for one minute past midnight on December 15th. No problem. That was the night after the Clearleft end-of-year party on December 14th.

Then I realised how dates work. One minute past midnight on December 15th is the same night as December 14th. I had double-booked myself.

It’s a nice dilemma to have; party or Star Wars? I decided to absolve myself of the decision by buying additional tickets for an evening showing on December 15th. That way, I wouldn’t feel like I had to run out of the Clearleft party before midnight, like some geek Cinderella.

In the end though, I did end up running out of the Clearleft party. I had danced and quaffed my fill, things were starting to get messy, and frankly, I was itching to immerse myself in the newest Star Wars film ever since Graham strapped a VR headset on me earlier in the day and let me fly a virtual X-wing.

Getting in the mood for Rogue One with the Star Wars Battlefront X-Wing VR mission—invigorating! (and only slightly queasy-making)

So, somewhat tired and slightly inebriated, I strapped in for the midnight screening of Rogue One: A Star Wars Story.

I thought it was okay. Some of the fan service scenes really stuck out, and not in a good way. On the whole, I just wasn’t that gripped by the story. Ah, well.

Still, the next evening, I had those extra tickets I had bought as psychological insurance. “Why not?” I thought, and popped along to see it again.

This time, I loved it. It wasn’t just me either. Jessica was equally indifferent the first time ‘round, and she also enjoyed it way more the second time.

I can’t recall having such a dramatic swing in my appraisal of a film from one viewing to the next. I’m not quite sure why it didn’t resonate the first time. Maybe I was just too tired. Maybe I was overthinking it too much, unable to let myself get caught up in the story because I was over-analysing it as a new Star Wars film. Anyway, I’m glad that I like it now.

Much has been made of its similarity to classic World War Two films, which I thought worked really well. But the aspect of the film that I found most thought-provoking was the story of Galen Erso. It’s the classic tale of an apparently good person reluctantly working in service to evil ends.

This reminded me of Mother Night, perhaps my favourite Kurt Vonnegut book (although, let’s face it, many of his books are interchangeable—you could put one down halfway through, and pick another one up, and just keep reading). Mother Night gives the backstory of Howard W. Campbell, who appears as a character in Slaughterhouse Five. In the introduction, Vonnegut states that it’s the one story of his with a moral:

We are what we pretend to be, so we must be careful about what we pretend to be.

If Galen Erso is pretending to work for the Empire, is there any difference between that and actually working for the Empire? In this case, there’s a get-out clause for this moral dilemma: by sabotaging the work (albeit very, very subtly) Galen’s soul appears to be absolved of sin. That’s the conclusion of the excellent post on the Sci-fi Policy blog, Rogue One: an ‘Engineering Ethics’ Story:

What Galen Erso does is not simply watch a system be built and then whistleblow; he actively shaped the design from its earliest stages considering its ultimate societal impacts. These early design decisions are proactive rather than reactive, which is part of the broader engineering ethics lesson of Rogue One.

I know I’m Godwinning myself with the WWII comparisons, but there are some obvious historical precedents for Erso’s dilemma. The New York Review of Books has an in-depth look at Werner Heisenberg and his “did he/didn’t he?” legacy with Germany’s stalled atom bomb project. One generous reading of his actions is that he kept the project going in order to keep scientists from being sent to the front, but made sure that the project was never ambitious enough to actually achieve destructive ends:

What the letters reveal are glimpses of Heisenberg’s inner life, like the depth of his relief after the meeting with Speer, reassured that things could safely tick along as they were; his deep unhappiness over his failure to explain to Bohr how the German scientists were trying to keep young physicists out of the army while still limiting uranium research work to a reactor, while not pursuing a fission bomb; his care in deciding who among friends and acquaintances could be trusted.

Speaking of Albert Speer, are his hands clean or dirty? And in the case of either answer, is it because of moral judgement or sheer ignorance? The New Atlantis dives deep into this question in Roger Forsgren’s article The Architecture of Evil:

Speer indeed asserted that his real crime was ambition — that he did what almost any other architect would have done in his place. He also admitted some responsibility, noting, for example, that he had opposed the use of forced labor only when it seemed tactically unsound, and that “it added to my culpability that I had raised no humane and ethical considerations in these cases.” His contrition helped to distance himself from the crude and unrepentant Nazis standing trial with him, and this along with his contrasting personal charm permitted him to be known as the “good Nazi” in the Western press. While many other Nazi officials were hanged for their crimes, the court favorably viewed Speer’s initiative to prevent Hitler’s scorched-earth policy and sentenced him to twenty years’ imprisonment.

I wish that these kinds of questions only applied to the past, but they are all too relevant today.

Software engineers in the United States are signing a pledge not to participate in the building of a Muslim registry:

We refuse to participate in the creation of databases of identifying information for the United States government to target individuals based on race, religion, or national origin.

That’s all well and good, but it might be that a dedicated registry won’t be necessary if those same engineers are happily contributing their talents to organisations whose business models are based on the ability to track and target people.

But now we’re into slippery slopes and glass houses. One person might draw the line at creating a Muslim registry. Someone else might draw the line at including any kind of invasive tracking script on a website. Someone else again might decide that the line is crossed by including Google Analytics. It’s moral relativism all the way down. But that doesn’t mean we shouldn’t draw lines. Of course it’s hard to live in an ideal state of ethical purity—from the clothes we wear to the food we eat to the electricity we use—but a muddy battleground is still capable of having a line drawn through it.

The question facing the fictional characters Galen Erso and Howard W. Campbell (and the historical figures of Werner Heisenberg and Albert Speer) is this: can I accomplish less evil by working within a morally repugnant system than being outside of it? I’m sure it’s the same question that talented designers ask themselves before taking a job at Facebook.

At one point in Rogue One, Galen Erso explicitly invokes the justification that they’d find someone else to do this work anyway. It sounds a lot like Tim Cook’s memo to Apple staff justifying his presence at a roundtable gathering that legitimised the election of a misogynist bigot to the highest office in the land. I’m sure that Tim Cook, Elon Musk, Jeff Bezos, and Sheryl Sandberg all think they are playing the part of Galen Erso but I wonder if they’ll soon find themselves indistinguishable from Orson Krennic.

Contact

I left the office one evening a few weeks back, and while I was walking up the street, James Box cycled past, waving a hearty good evening to me. I didn’t see him at first. I was in a state of maximum distraction. For one thing, there was someone walking down the street with a magnificent Irish wolfhound. If that weren’t enough to dominate my brain, I also had headphones in my ears through which I was listening to an audio version of a TED talk by Donald Hoffman called Do we really see reality as it is?

It’s fascinating—if mind-bending—stuff. It sounds like the kind of thing that’s used to justify Deepak Chopra style adventures in la-la land, but Hoffman is deliberately taking a rigorous approach. He knows his claims are outrageous, but he welcomes all attempts to falsify his hypotheses.

I’m not noticing this just from a short TED talk. It’s been one of those strange examples of synchronicity where his work has been popping up on my radar multiple times. There’s an article in Quanta magazine that was also republished in The Atlantic. And there’s a really good interview on the You Are Not So Smart podcast that I huffduffed a while back.

But the most unexpected place that Hoffman popped up was when I was diving down a SETI (or METI) rabbit hole. There I was reading about the Cosmic Call project and Lincos when I came across this article: Why ‘Arrival’ Is Wrong About the Possibility of Talking with Space Aliens, with its subtitle “Human efforts to communicate with extraterrestrials are doomed to failure, expert says.” The expert in question pulling apart the numbers in the Drake equation turned out to be none other than Donald Hoffman.

A few years ago, at a SETI Institute conference on interstellar communication, Hoffman appeared on the bill after a presentation by radio astronomer Frank Drake, who pioneered the search for alien civilizations in 1960. Drake showed the audience dozens of images that had been launched into space aboard NASA’s Voyager probes in the 1970s. Each picture was carefully chosen to be clearly and easily understood by other intelligent beings, he told the crowd.

After Drake spoke, Hoffman took the stage and “politely explained how every one of the images would be infinitely ambiguous to extraterrestrials,” he recalls.

I’m sure he’s quite right. But let’s face it, the Voyager golden record was never really about communicating with an alien intelligence …it was about how we present ourselves.

Vertical limit

When I was first styling Resilient Web Design, I made heavy use of vh units. The vertical spacing between elements—headings, paragraphs, images—was all proportional to the overall viewport height. It looked great!

Then I tested it on real devices.

Here’s the problem: when a page loads up in a mobile browser—like, say, Chrome on an Android device—the URL bar is at the top of the screen. The height of that piece of the browser interface isn’t taken into account for the viewport height. That makes sense: the viewport height is the amount of screen real estate available for the content. The content doesn’t extend into the URL bar, therefore the height of the URL bar shouldn’t be part of the viewport height.

But then if you start scrolling down, the URL bar scrolls away off the top of the screen. So now it’s behaving as though it is part of the content rather than part of the browser interface. At this point, the value of the viewport height changes: now it’s the previous value plus the height of the URL bar that was previously there but which has now disappeared.

I totally understand why the URL bar is squirrelled away once the user starts scrolling—it frees up some valuable vertical space. But because that necessarily means recalculating the viewport height, it effectively makes the vh unit in CSS very, very limited in scope.

In my initial implementation of Resilient Web Design, the one where I was styling almost everything with vh, the site was unusable. Every time you started scrolling, things would jump around. I had to go back to the drawing board and remove almost all instances of vh from the styles.

I’ve left it in for one use case and I think it’s the most common use of vh: making an element take up exactly the height of the viewport. The front page of the web book uses min-height: 100vh for the title.
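
The CSS is as simple as it gets. Something like this (the class name here is illustrative, not the actual one in the book’s stylesheet):

.intro {
    min-height: 100vh;
}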

But as soon as you scroll down from there, that element changes height. The content below it suddenly moves.

Let’s say the overall height of the browser window is 600 pixels, of which 50 pixels are taken up by the URL bar. When the page loads, 100vh is 550 pixels. But as soon as you scroll down and the URL bar floats away, the value of 100vh becomes 600 pixels.

(This also causes problems if you’re using vertical media queries. If you choose the wrong vertical breakpoint, then the media query won’t kick in when the page loads but will kick in once the user starts scrolling …or vice-versa.)

There’s a mixed message here. On the one hand, the browser is declaring that the URL bar is part of its interface; that the space is off-limits for content. But then, once scrolling starts, that is invalidated. Now the URL bar is behaving as though it is part of the content scrolling off the top of the viewport.

The result of this messiness is that the vh unit is practically useless for real-world situations with real-world devices. It works great for desktop browsers if you’re grabbing the browser window and resizing, but that’s not exactly a common scenario for anyone other than web developers.

I’m sure there’s a way of solving it with JavaScript but that feels like using an atomic bomb to crack a walnut—the whole point of having this in CSS is that we don’t need to use JavaScript for something related to styling.
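
For what it’s worth, the JavaScript fix would probably look something like this sketch: measure the viewport once, stash the value in a custom property, and only update it when the orientation changes (the property name is my own invention):

// Stash 1% of the viewport height in a custom property
function setViewportUnit() {
  var vh = window.innerHeight / 100;
  document.documentElement.style.setProperty('--vh', vh + 'px');
}
setViewportUnit();
// Update on orientation change, but not on scroll
window.addEventListener('orientationchange', setViewportUnit);

Then the stylesheet would use min-height: calc(var(--vh, 1vh) * 100) instead of min-height: 100vh. It works, but now a purely presentational concern depends on JavaScript, which is exactly the trade-off I’d rather avoid.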

It’s such a shame. A piece of CSS that’s great in theory, and is really well supported, just falls apart where it matters most.

Update: There’s a two-year old bug report on this for Chrome, and it looks like it might actually get fixed in February.

2016 reading list

I was having a think back over 2016, trying to remember which books I had read during the year. To the best of my recollection, I think that this is the final tally…

Non-fiction

  • Endurance by Alfred Lansing
  • The Rational Optimist by Matt Ridley
  • The Real World of Technology by Ursula Franklin
  • Design For Real Life by Eric Meyer and Sara Wachter-Boettcher
  • Practical SVG by Chris Coyier
  • Demystifying Public Speaking by Lara Hogan
  • Working The Command Line by Remy Sharp

Fiction

  • The Revenant by Michael Punke
  • The Adjacent by Christopher Priest
  • Helliconia Spring by Brian Aldiss
  • High Rise by J.G. Ballard
  • The Affirmation by Christopher Priest
  • Brodeck’s Report by Philippe Claudel
  • Greybeard by Brian Aldiss
  • Fictions by Jorge Luis Borges
  • The Long Way to a Small Angry Planet by Becky Chambers
  • The Dark Forest by Cixin Liu
  • Death’s End by Cixin Liu
  • The First Fifteen Lives of Harry August by Claire North

Seems kinda meagre to me. Either I need to read more books or I need to keep better track of what books I’m reading when. Starting now.

Twenty sixteen

When I took a look back at 2015, it was to remark on how nicely uneventful it was. I wish I could say the same about 2016. Instead, this was the year that too damned much kept happening.

The big picture was dominated by Brexit and Trump, disasters that are sure to shape events for years to come. I try to keep the even bigger picture in perspective and remind myself that our species is doing well, and that we’re successfully battling poverty, illiteracy, violence, pollution, and disease. But it’s so hard sometimes. I still think the overall trend for this decade will be two steps forward, but the closing half is almost certain to be one step back.

Some people close to me have had a really shitty year. More than anything, I wish I could do more to help them.

Right now I’m thinking that one of the best things I could wish for 2017 is for it to be an uneventful year. I’d really like it if the end-of-year round-up in 365 days’ time had no world-changing events.

But for me personally? 2016 was fine. I didn’t accomplish any big goals—although I’m very proud to have published Resilient Web Design—but I’ve had fun at work, and as always, I’m very grateful for all the opportunities that came my way.

I ate some delicious food…

Short rib. Seabass with carrot-top pesto on beet greens and carrot purée. Bratwurst. Sausage and sauerkraut. Short ribs. Homemade pappardelle with pig cheek ragu. Barbecued Thai chicken. Daily oyster. Kebab. Chicharrones. Nightfireburger. Ribeye.

I went to beautiful places…

Popped in to see Caravaggio and Holbein. Our home for the week. Bodleian go where no one has gone before. Tram. Amsterdam’s looking lovely this morning. Stockholm street. Mauer. Ah, Venice! Barcelona. Malibu sunset. Cuskinny.

And I got to hang out with some lovely doggies…

Mia! Archie is my favourite @EnhanceConf speaker. Mesa, Lola, and @wordridden. Rainier McChedderton! I met Zero! Yay! Thanks, @wilto. On the bright side, Huxley is in the @Clearleft office today. The day Herbie came to visit @Clearleft. It’s Daphne. Poppy’s on patrol. Morty! Scribble is a good dog. Sleepy.

Have a happy—and uneventful—new year!

The many formats of Resilient Web Design

If you don’t like reading in a web browser, you might like to know that Resilient Web Design is now available in more formats.

Jiminy Panoz created a lovely EPUB version. I tried it out in Apple’s iBooks app and it looks great. I tried to submit it to the iBooks store too, but that process threw up a few too many roadblocks.

Oliver Williams has created a MOBI version. That means you can read it on a Kindle. I plugged my old Kindle into my computer, dragged that file onto its disc image, and it worked a treat.

And there are always the PDF versions; one in portrait and another in landscape format. Those were generated straight from the print styles.

Oh, and there’s the podcast. I’ve only released two chapters so far. The Christmas break and an untimely cold have slowed down the release schedule a little bit.

I’d love to make a physical, print-on-demand version of Resilient Web Design available—maybe through Lulu—but my InDesign skills are non-existent.

If you think the book should be available in any other formats, and you fancy having a crack at it, please feel free to use the source files.

Deep linking with fragmentions

When I was marking up Resilient Web Design I wanted to make sure that people could link to individual sections within a chapter. So I added IDs to all the headings. There’s no UI to expose that though—like the hover pattern that some sites use to show that something is linkable—so unless you know the IDs are there, there’s no way of getting at them other than “view source.”

But if you’re reading a passage in Resilient Web Design and you highlight some text, you’ll notice that the URL updates to include that text after a hash symbol. If that updated URL gets shared, then anyone following it should be sent straight to that string of text within the page. That’s fragmentions in action:

Fragmentions find the first matching word or phrase in a document and focus its closest surrounding element. The match is determined by the case-sensitive string following the # (or ## double-hash)

It’s a similar idea to Eric and Simon’s proposal to use CSS selectors as fragment identifiers, but using plain text instead. You can find out more about the genesis of fragmentions from Kevin. I’m using Jonathon Neal’s script with some handy updates from Matthew.
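
The core idea is simple enough to sketch in a few lines of JavaScript. To be clear, this is not Jonathon’s actual script, just a naive illustration of the concept:

// A naive sketch of the fragmention idea:
// find the text after the hash and scroll to the element containing it
function findFragmention() {
  var text = decodeURIComponent(window.location.hash.replace(/^##?/, ''));
  if (!text) return;
  // Let ordinary fragment identifiers win
  if (document.getElementById(text)) return;
  var walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
  var node;
  while ((node = walker.nextNode())) {
    if (node.textContent.indexOf(text) !== -1) {
      node.parentElement.scrollIntoView();
      return;
    }
  }
}
window.addEventListener('hashchange', findFragmention);
findFragmention();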

I’m using the fragmention support to power the index of the book. It relies on JavaScript to work though, so Matthew has come to the rescue again and created a version of the site with IDs for each item linked from the index (I must get around to merging that).

The fragmention functionality is ticking along nicely, apart from one problem…

I’ve tweaked the typography of Resilient Web Design to within an inch of its life, including a crude but effective technique to avoid widowed words at the end of a paragraph. The last two words of every paragraph are separated by a UTF-8 no-break space character instead of a regular space.
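
However that’s applied, at authoring time or with a script, the transformation amounts to something like this sketch (simplified: it assumes each paragraph ends with a plain text node):

// Swap the last space in each paragraph for a no-break space
// so the final two words always stay together
document.querySelectorAll('p').forEach( paragraph => {
  var lastNode = paragraph.lastChild;
  if (lastNode && lastNode.nodeType === Node.TEXT_NODE) {
    // Replace the final space with U+00A0
    lastNode.textContent = lastNode.textContent.replace(/ (?!.* )/, '\u00A0');
  }
});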

That solves the widowed words problem, but it confuses the fragmention script. Any selected text that includes the last two words of a paragraph fails to match. I’ve tried tweaking the script, but I’m stumped. If you fancy having a go, please have at it.

Update: And fixed! Thanks to Lee.

Print styles

I really wanted to make sure that the print styles for Resilient Web Design were pretty good—or at least as good as they could be given the everlasting lack of support for many print properties in browsers.

Here’s the first thing I added:

@media print {
  @page {
    margin: 1in 0.5in 0.5in;
    orphans: 4;
    widows: 3;
  }
}

That sets the margins of printed pages in inches (I could’ve used centimetres but the numbers were nice and round in inches). The orphans: 4 declaration says that if fewer than four lines would be left at the bottom of a page, shunt them all onto the next page. And widows: 3 declares that there shouldn’t be fewer than three lines left alone at the top of a page (instead more lines will be carried over from the previous page).

I always get widows and orphans confused so I remind myself that orphans are left alone at the start; widows are left alone at the end.

I try to make sure that some elements don’t get their content split up over page breaks:

@media print {
  p, li, pre, figure, blockquote {
    page-break-inside: avoid;
  }
}

I don’t want headings appearing at the end of a page with no content after them:

@media print {
  h1,h2,h3,h4,h5 {
    page-break-after: avoid;
  }
}

But sections should always start with a fresh page:

@media print {
  section {
    page-break-before: always;
  }
}

There are a few other little tweaks to hide some content from printing, but that’s pretty much it. Using print preview in browsers showed some pretty decent formatting. In fact, I used the “Save as PDF” option to create the PDF versions of the book. The portrait version comes from Chrome’s preview. The landscape version comes from Firefox, which offers more options under “Layout”.

For some more print style suggestions, have a look at the article I totally forgot about print style sheets. There are also tips and tricks for print style sheets on Smashing Magazine. That includes a clever little trick for generating QR codes that only appear when a document is printed. I’ve used that technique for some page types over on The Session.

Design principles

Andrew Travers wrote about designing design principles at Co-op Digital. I’m somewhat obsessed with design principles—hence my collection—so I’m also obsessed with figuring out what makes for “good” design principles.

One of my favourite design principles (yes, I have favourites) is from the HTML Design Principles. It’s the priority of constituencies:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

The emphasis is my own. It demonstrates how the design principle can be put to use (“in case of conflict”). Andrew also describes uses for the design principles they’re putting together:

What we’re building towards is a set of principles, few enough to be memorable, short enough to be repeatable, relevant enough to be usable. When we’re running a design crit, it’s these principles that we want to lean on. When a sole designer in an agile delivery team is talking about a design approach, it’s these principles that back her up.

Those sound like good use cases to me. Those are situations when design principles can help people reach agreement on priorities, without it having to be about ego or who shouts loudest.

I think it was from Cennydd that I heard about a really good test of a design principle: is it reversible? In other words, could you imagine the exact opposite of the design principle being perfectly valid in a different organisation or on a different project? If not, then the principle may be too weak to be effective. (Cennydd points out that he heard this from Jared who has written a lot about evaluating design principles.)

“Make it easy to use” would be an example of a weak design principle. It’s hard to imagine a situation where “make it hard to use” would be a reasonable guiding principle.

Frankly, there are plenty of “bad” examples in my collection of design principles. Many of them wouldn’t pass the reversibility test. Just recently though, I spotted some that would pass the test with flying colours. They weren’t even labelled as design principles—they’re the tips that Heydon includes at the end of his excellent 24 Ways article on inclusive design:

  • Involve code early
  • Respect conventions
  • Don’t be exact
  • Enforce simplicity

I could easily imagine endeavours where the complete opposite of those tips would be valued. Personally, I think they’re really great design principles.

I should add them to the list.

The typography of a web book

I’m a sucker for classic old-style serif typefaces: Caslon, Baskerville, Bembo, Garamond …I love ‘em. That’s probably why I’ve always found the typesetting in Edward Tufte’s books so appealing—he always uses a combination of Bembo for body copy and Gill Sans for headings.

Earlier this year I stumbled on a screen version of Bembo used for Tufte’s digital releases called ET Book. Best of all, it’s open source:

ET Book is a Bembo-like font for the computer designed by Dmitry Krasny, Bonnie Scranton, and Edward Tufte. It is free and open-source.

When I was styling Resilient Web Design, I knew that the choice of typeface would be one of the most important decisions I would make. Remembering that open source ET Book font, I plugged it in to see how it looked. I liked what I saw. I found it particularly appealing when it’s full black on full white at a nice big size (at lower contrast or smaller sizes, it starts to get a bit fuzzy).

I love, love, love the old-style numerals of ET Book. But I was disappointed to see that ligatures didn’t seem to be coming through (even when I had enabled them in CSS). I mentioned this to Rich and of course he couldn’t resist doing a bit of typographic sleuthing. It turns out that the ligature glyphs are there in the source files but the files needed a little tweaking to enable them. Because the files are open source, Rich was able to tweak away to his heart’s content. I then took the tweaked OpenType files and ran them through Font Squirrel to generate WOFF and WOFF2 files. I’ve put them on Github.
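
Serving the files is then just a matter of a regular @font-face declaration, something like this (the paths are illustrative):

@font-face {
    font-family: "ET Book";
    src: url("/fonts/et-book.woff2") format("woff2"),
         url("/fonts/et-book.woff") format("woff");
    font-weight: normal;
    font-style: normal;
}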

For this book, I decided that the measure would be the priority. I settled on a measure of around 55 to 60 characters—about 10 or 11 words per line. I used a max-width of 27em combined with Mike’s brilliant fluid type technique to maintain a consistent measure.
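
If you’re curious what that fluid type technique looks like, here’s the general pattern. The numbers are made up for illustration rather than being the actual values in the book’s stylesheet:

html {
    font-size: 16px;
}
@media all and (min-width: 320px) and (max-width: 1400px) {
    html {
        /* Scale smoothly from 16px at 320px wide to 32px at 1400px wide */
        font-size: calc(16px + (32 - 16) * ((100vw - 320px) / (1400 - 320)));
    }
}
@media all and (min-width: 1400px) {
    html {
        font-size: 32px;
    }
}

Because the max-width is set in ems, the width of the text grows along with the font size, so the measure stays consistent.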

It looks great on small-screen devices and tablets. On large screens, the font size starts to get really, really big. Personally, I like that. Lots of other people like it too. But some people really don’t like it. I should probably add a font-resizing widget for those who find the font size too shocking on luxuriously large screens. In the meantime, their only recourse is to fork the CSS to make their own version of the book with more familiar font sizes.

The visceral reaction a few people have expressed to the font size reminds me of the flak Jeffrey received when he redesigned his personal site a few years back:

Many people who’ve visited this site since the redesign have commented on the big type. It’s hard to miss. After all, words are practically the only feature I haven’t removed. Some of the people say they love it. Others are undecided. Many are still processing. A few say they hate it and suggest I’ve lost my mind.

I wonder how the people who complained then are feeling now, a few years on, in a world with Medium in it? Jeffrey’s redesign doesn’t look so extreme any more.

Resilient Web Design will be on the web for a very, very, very long time. I’m curious to see if its type size will still look shockingly large in years to come.

Introducing Resilient Web Design

I wrote a thing. The thing is a book. But the book is not published on paper. This book is on the web. It’s a web book. Or “wook” if you prefer …please don’t prefer. Here it is:

Resilient Web Design.

It’s yours for free.

Much of the subject matter will be familiar if you’ve seen my conference talks from the past couple of years, particularly Enhance! and Resilience. But the book ended up taking some twists and turns that surprised me. It turned out to be a bit of a history book: the history of design, the history of the web.

Resilient Web Design is a short book. It’s between sixteen and seventeen thousand words long. You could read the whole thing in a couple of hours. Or—because the book has seven chapters—you could take fifteen to twenty minutes out of a day to read one chapter and you’d have the whole thing read in a week.

If you make websites in any capacity, I hope that this book will resonate with you. Even if you don’t make websites, I still hope there’s an interesting story in there for you.

You can read the whole book on the web, but if you’d rather have a single file to carry around, I’ve made some PDFs as well: one in portrait, one in landscape.

I’ve licensed the book quite liberally. It’s released under a Creative Commons attribution share-alike licence. That means you can re-use the material in any way you want (even commercial usage) as long as you provide some attribution and use the same licence. So if you’d like to release the book in some other format like ePub or anything, go for it.

I’m currently making an audio version of Resilient Web Design. I’ll be releasing it one chapter at a time as a podcast. Here’s the RSS feed if you want to subscribe to it. Or you can subscribe directly from iTunes.

I took my sweet time writing this book. I wrote the first chapter in March 2015. I wrote the last chapter in May 2016. Then I sat on it for a while, figuring out what to do with it. Eventually I decided to just put the whole thing up on the web—it seems fitting.

Whereas the writing took over a year of solid procrastination, making the website went surprisingly quickly. After one weekend of marking up and styling, I had most of it ready to go. Then I spent a while tweaking. The source files are on Github.

I’m pretty happy with the end result. I’ll write a bit more about some of the details over the next while—the typography, the offline functionality, print styles, and stuff like that. In the meantime, I hope you’ll peruse this little book at your leisure…

Resilient Web Design.

If you like it, please spread the word.

Certbot renewals with Apache

I wrote a while back about switching to HTTPS on Apache 2.4.7 on Ubuntu 14.04 on Digital Ocean. In that post, I pointed to an example .conf file.

I’ve been having a few issues with my certificate renewals with Certbot (the artist formerly known as Let’s Encrypt). If I did a dry-run for renewing my certificates…

/etc/certbot-auto renew --dry-run

… I kept getting this message:

Encountered vhost ambiguity but unable to ask for user guidance in non-interactive mode. Currently Certbot needs each vhost to be in its own conf file, and may need vhosts to be explicitly labelled with ServerName or ServerAlias directives. Falling back to default vhost *:443…

It turns out that Certbot doesn’t like HTTP and HTTPS configurations being lumped into one .conf file. Instead it expects to see all the port 80 stuff in a domain.com.conf file, and the port 443 stuff in a domain.com-ssl.conf file.

So I’ve taken that original .conf file and split it up into two.

First I SSH’d into my server and went to the Apache directory where all these .conf files live:

cd /etc/apache2/sites-available

Then I copied the current (single) file to make the SSL version:

cp yourdomain.com.conf yourdomain.com-ssl.conf

Time to fire up one of those weird text editors to edit that newly-created file:

nano yourdomain.com-ssl.conf

I deleted everything related to port 80—all the stuff between (and including) the VirtualHost *:80 tags:

<VirtualHost *:80>
...
</VirtualHost>

Hit ctrl and o, press enter in response to the prompt, and then hit ctrl and x.

Now I do the opposite for the original file:

nano yourdomain.com.conf

Delete everything related to VirtualHost *:443:

<VirtualHost *:443>
...
</VirtualHost>

Once again, I hit ctrl and o, press enter in response to the prompt, and then hit ctrl and x.

Now I need to tell Apache about the new .conf file:

a2ensite yourdomain.com-ssl.conf

I’m told that’s cool and all, but that I need to restart Apache for the changes to take effect:

service apache2 restart

Now when I test the certificate renewing process…

/etc/certbot-auto renew --dry-run

…everything goes according to plan.

Fractal ways

24 Ways is back! That’s how we web nerds know that the Christmas season is here. It kicked off this year with a most excellent bit of hardware hacking from Seb: Internet of Stranger Things.

The site is looking lovely as always. There’s also a component library to accompany it: Bits, the front-end component library for 24 ways. Nice work, courtesy of Paul. (I particularly like the comment component example).

The component library is built with Fractal, the magnificent tool that Mark has open-sourced. We’ve been using it at Clearleft for a while now, but we haven’t had a chance to make any of the component libraries public so it’s really great to be able to point to the 24 Ways example. The code is all on Github too.

There’s a really good buzz around Fractal right now. Lots of people in the design systems Slack channel are talking about it. There’s also a dedicated Fractal Slack channel for people getting into the nitty-gritty of using the tool.

If you’re currently wrestling with the challenges of putting a front-end component library together, be sure to give Fractal a whirl.

Starting out

I had a really enjoyable time at Codebar Brighton last week, not least because Morty came along.

I particularly enjoy teaching people who have zero previous experience of making a web page. There’s something about explaining HTML and CSS from first principles that appeals to me. I especially love it when people ask lots of questions. “What does this element do?”, “Why do some elements have closing tags and others don’t?”, “Why is it textarea and not input type="textarea"?” The answer usually involves me going down a rabbit-hole of web archeology, so I’m in my happy place.

But there’s only so much time at Codebar each week, so it’s nice to be able to point people to other resources that they can peruse at their leisure. It turns out that it’s actually kind of tricky to find resources at that level. There are lots of great articles and tutorials out there for professional web developers—Smashing Magazine, A List Apart, CSS Tricks, etc.—but not so much for complete beginners.

Here are some of the resources I’ve found:

  • MarkSheet by Jeremy Thomas is a free HTML and CSS tutorial. It starts with an explanation of the internet, then the World Wide Web, and then web browsers, before diving into HTML syntax. Jeremy is the same guy who recently made CSS Reference.
  • Learn to Code HTML & CSS by Shay Howe is another free online book. You can buy a paper copy too. It’s filled with good, clear explanations.
  • Zero to Hero Coding by Vera Deák is an ongoing series. She’s starting out on her career as a front-end developer, so her perspective is particularly valuable.

If I find any more handy resources, I’ll link to them and tag them with “learning”.

Teasing

Screenshots: caption, blockquote, chapter 1, formats, index, about the author, table of contents.

Between the braces

In a post called Side Effects in CSS that he wrote a while back, Philip Walton talks about different kinds of challenges in writing CSS:

There are two types of problems in CSS: cosmetic problems and architectural problems.

The cosmetic problems are solved by making something look the way you want it to. The architectural problems are trickier because they have more long-term effects—maintainability, modularity, encapsulation; all that tricky stuff. Philip goes on to say:

If I had to choose between hiring an amazing designer who could replicate even the most complicated visual challenges easily in code and someone who understood the nuances of predictable and maintainable CSS, I’d choose the latter in a heartbeat.

This resonates with something I noticed a while back while I was doing some code reviews. Most of the time when I’m analysing CSS and trying to figure out how “good” it is—and I know that’s very subjective—I’m concerned with what’s on the outside of the curly braces.

selector {
    property: value;
}

The stuff inside the curly braces—the properties and values—that’s where the cosmetic problems get solved. It’s also the stuff that you can look up; I certainly don’t try to store all possible CSS properties and values in my head. It’s also easy to evaluate: Does it make the thing look like you want it to look? Yes? Good. It works.

The stuff outside the curly braces—the selectors—that’s harder to judge. It needs to be evaluated with lots of “what ifs”: What if this selects something you didn’t intend to? What if the markup changes? What if someone else writes some CSS that negates this?
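
To make that concrete, here’s a hypothetical example. Both rules solve the same cosmetic problem; the difference is all in the selector:

/* What if someone later puts a different kind of link inside .news? */
.news a {
    color: red;
}

/* This selector only ever matches what it names */
.news-link {
    color: red;
}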

I find it fascinating that most of the innovation in CSS from the browser makers and standards bodies arrives in the form of new properties and values—flexbox, grid, shapes, viewport units, and so on. Meanwhile there’s a whole other world of problems to be solved outside the curly braces. There’s not much that the browser makers or standards bodies can do to help us there. I think that’s why most of the really interesting ideas and thoughts around CSS in recent years have focused on that challenge.

Less JavaScript

Every front-end developer at Clearleft went to FFConf last Friday: me, Mark, Graham, Charlotte, and Danielle. We weren’t about to pass up the opportunity to attend a world-class dev conference right here in our home base of Brighton.

The day was unsurprisingly excellent. All the speakers brought their A-game on a wide range of topics. Of course JavaScript was covered, but there was also plenty of mindfood on CSS, accessibility, progressive enhancement, dev tools, creative coding, and even emoji.

Normally FFConf would be a good opportunity to catch up with some Pauls from the Google devrel team, but because of an unfortunate scheduling clash this year, all the Pauls were at Chrome Dev Summit 2016 on the other side of the Atlantic.

I’ve been catching up on the videos from the event. There’s plenty of tech-related stuff: dev tools, web components, and plenty of talk about progressive web apps. But there was also a very, very heavy focus on performance. I don’t just mean performance at the shallow scale of file size and optimisation, but a genuine questioning of the impact of our developer workflows and tools.

In his talk on service workers (what else?), Jake makes the point that not everything needs to be a single page app, echoing Ada’s talk at FFConf.

He makes the point that if you really want fast rendering, nothing on the client side quite beats a server render.

They’ve written a lot of JavaScript to make this quite slow.

Unfortunately, all too often, I hear people say that a progressive web app must be a single page app. And I am not so sure. You might not need a single page app. A single page app can end up being a lot of work and slower. There’s a lot of cargo-culting around single page apps.

Alex followed up his barnstorming talk from the Polymer Summit with some more uncomfortable truths about how mobile phones work.

Cell networks are basically kryptonite to the protocols and assumptions that the web was built on.

And JavaScript frameworks aren’t helping. Quite the opposite.

But make no mistake: if you’re using one of today’s more popular JavaScript frameworks in the most naive way, you are failing by default. There is no sugarcoating this.

Today’s frameworks are mostly a sign of ignorance, or privilege, or both. The good news is that we can fix the ignorance.

Resilience retires

I spoke at the GOTO conference in Berlin this week. It was the final outing of a talk I’ve been giving for about a year now called Resilience.

Looking back over my speaking engagements, I reckon I must have given this talk—in one form or another—about sixteen times. If by some statistical fluke or through skilled avoidance strategies you managed not to see the talk, you can still have it rammed down your throat by reading a transcript of the presentation.

That particular outing is from Beyond Tellerrand earlier this year in Düsseldorf. That’s one of the events that recorded a video of the talk, and it’s not the only one.

Or, if you prefer, here’s an audio file. And here are the slides but they won’t make much sense by themselves.

Resilience is a mixture of history lesson and design strategy. The history lesson is about the origins of the internet and the World Wide Web. The design strategy is a three-pronged approach:

  1. Identify core functionality.
  2. Make that functionality available using the simplest technology.
  3. Enhance!

And if you like that tweet-sized strategy, you can get it on a poster. Oh, and check this out: Belgian student Sébastian Seghers published a school project on the talk.

Now, you might be thinking that the three-headed strategy sounds an awful lot like progressive enhancement, and you’d be right. I think every talk I’ve ever given has been about progressive enhancement to some degree. But with this presentation I set myself a challenge: to talk about progressive enhancement without ever using the phrase “progressive enhancement”. This is something I wrote about last year—if the term “progressive enhancement” is commonly misunderstood by the very people who would benefit from hearing this message, maybe it’s best to not mention that term and talk about the benefits of progressive enhancement instead: robustness, resilience, and technical credit. I think that little semantic experiment was pretty successful.

While the time has definitely come to retire the presentation, I’m pretty pleased with it, and I feel like it got better with time as I adjusted the material. The most common format for the talk was 40 to 45 minutes long, but there was an extended hour-long “director’s cut” that only appeared at An Event Apart. That included an entire subplot about Arthur C. Clarke and the invention of the telegraph (I’m still pretty pleased with the segue I found to weave those particular threads together).

Anyway, with the Resilience talk behind me, my mind is now occupied with the sequel: Evaluating Technology. I recently shared my research material for this one and, as you may have gathered, it takes me a loooong time to put a presentation like this together (which, by the same token, is one of the reasons why I end up giving the same talk multiple times within a year).

This new talk had its debut at An Event Apart in San Francisco two weeks ago. Jeffrey wrote about it and I’m happy to say he liked it. This bodes well—I’m already booked in for An Event Apart Seattle in April. I’ll also be giving an abridged version of this new talk at next year’s Render conference.

But that’s it for my speaking schedule for now. 2016 is all done and dusted, and 2017 is looking wide open. I hope I’ll get some more opportunities to refine and adjust the Evaluating Technology talk at some more events. If you’re a conference organiser and it sounds like something you’d be interested in, get in touch.

In the meantime, it’s time for me to pack away the Resilience talk, and wheel it down into the archives, just like the closing scene of Raiders Of The Lost Ark. The music swells. The credits roll. The image fades to black.