
Making Resilient Web Design work offline

I’ve written before about taking an online book offline, documenting the process behind the web version of HTML5 For Web Designers. A book is quite a static thing so it’s safe to take a fairly aggressive offline-first approach. In fact, a static unchanging book is one of the few situations that AppCache works for. Of course a service worker is better, but until AppCache is removed from browsers (and until service worker is supported across the board), I’m using both. I wouldn’t recommend that for most sites though—use a service worker to enhance them, and avoid AppCache like the plague.

For Resilient Web Design, I took a similar approach to HTML5 For Web Designers but I knew that there was a good chance that some of the content would be getting tweaked at least for a while. So while the approach is still cache-first, I decided to keep the cache fairly fresh.

Here’s my service worker. It starts with the usual stuff: when the service worker is installed, there’s a list of static assets to cache. In this case, that list is literally everything; all the HTML, CSS, JavaScript, and images for the whole site. Again, this is a pattern that works well for a book, but wouldn’t be right for other kinds of websites.

The real heavy lifting happens with the fetch event. This is where the logic sits for what the service worker should do every time there’s a request for a resource. I’ve documented the logic with comments:

// Look in the cache first, fall back to the network
  // CACHE
  // Did we find the file in the cache?
      // If so, fetch a fresh copy from the network in the background
      // NETWORK
          // Stash the fresh copy in the cache
  // NETWORK
  // If the file wasn't in the cache, make a network request
      // Stash a fresh copy in the cache in the background
  // OFFLINE
  // If the request is for an image, show an offline placeholder
  // If the request is for a page, show an offline message

So my order of preference is:

  1. Try the cache first,
  2. Try the network second,
  3. Fall back to a placeholder as a last resort.

Leaving aside that third part, regardless of whether the response is served straight from the cache or from the network, the cache gets a top-up. If the response is being served from the cache, there’s an additional network request made to get a fresh copy of the resource that was just served. This means that the user might be seeing a slightly stale version of a file, but they’ll get the fresher version next time round.

Again, I think this is acceptable for a book where the tweaks and changes should be fairly minor, but I definitely wouldn’t want to do it on a more dynamic site where the freshness matters more.

Here’s what it usually looks like when a file is served up from the cache:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      return responseFromCache;
  }

I’ve introduced an extra step where the fresher version is fetched from the network. This is where the code can look a bit confusing: the network request is happening in the background after the cached file has already been returned, but the code appears before the return statement:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      // If so, fetch a fresh copy from the network in the background
      event.waitUntil(
          // NETWORK
          fetch(request)
          .then( responseFromFetch => {
              // Stash the fresh copy in the cache
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      return responseFromCache;
  }

It’s asynchronous, see? So even though all that network code appears before the return statement, it’s pretty much guaranteed to complete after the cache response has been returned. You can verify this by putting in some console.log statements:

caches.match(request)
.then( responseFromCache => {
  if (responseFromCache) {
      event.waitUntil(
          fetch(request)
          .then( responseFromFetch => {
              console.log('Got a response from the network.');
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      console.log('Got a response from the cache.');
      return responseFromCache;
  }

Those log statements will appear in this order:

Got a response from the cache.
Got a response from the network.

That’s the opposite order in which they appear in the code. Everything inside the event.waitUntil part is asynchronous.

Here’s the catch: this kind of asynchronous waitUntil hasn’t landed in all the browsers yet. The code I’ve written will fail in browsers that haven’t implemented it.

But never fear! Jake has written a polyfill. All I need to do is include that at the start of my serviceworker.js file and I’m good to go:

// Import Jake's polyfill for async waitUntil
importScripts('/js/async-waituntil.js');

I’m also using it when a file isn’t found in the cache, and is returned from the network instead. Here’s what the usual network code looks like:

fetch(request)
  .then( responseFromFetch => {
    return responseFromFetch;
  })

I want to also store that response in the cache, but I want to do it asynchronously—I don’t care how long it takes to put the file in the cache as long as the user gets the response straight away.

Technically, I’m not putting the response in the cache; I’m putting a copy of the response in the cache (it’s a stream, so I need to clone it if I want to do more than one thing with it).

fetch(request)
  .then( responseFromFetch => {
    // Stash a fresh copy in the cache in the background
    let responseCopy = responseFromFetch.clone();
    event.waitUntil(
      caches.open(staticCacheName)
      .then( cache => {
          cache.put(request, responseCopy);
      })
    );
    return responseFromFetch;
  })

That all seems to be working well in browsers that support service workers. For legacy browsers, like Mobile Safari, there’s the much blunter caveman logic of an AppCache manifest.

Here’s the JavaScript that decides whether a browser gets the service worker or the AppCache:

if ('serviceWorker' in navigator) {
  // If service workers are supported
  navigator.serviceWorker.register('/serviceworker.js');
} else if ('applicationCache' in window) {
  // Otherwise inject an iframe to use appcache
  var iframe = document.createElement('iframe');
  iframe.setAttribute('src', '/appcache.html');
  iframe.setAttribute('style', 'width: 0; height: 0; border: 0');
  document.querySelector('footer').appendChild(iframe);
}

Either way, people are making full use of the offline nature of the book and that makes me very happy indeed.

Contact

I left the office one evening a few weeks back, and while I was walking up the street, James Box cycled past, waving a hearty good evening to me. I didn’t see him at first. I was in a state of maximum distraction. For one thing, there was someone walking down the street with a magnificent Irish wolfhound. If that weren’t enough to dominate my brain, I also had headphones in my ears through which I was listening to an audio version of a TED talk by Donald Hoffman called Do we really see reality as it is?

It’s fascinating—if mind-bending—stuff. It sounds like the kind of thing that’s used to justify Deepak Chopra style adventures in la-la land, but Hoffman is deliberately taking a rigorous approach. He knows his claims are outrageous, but he welcomes all attempts to falsify his hypotheses.

I’m not noticing this just from a short TED talk. It’s been one of those strange examples of synchronicity where his work has been popping up on my radar multiple times. There’s an article in Quanta magazine that was also republished in The Atlantic. And there’s a really good interview on the You Are Not So Smart podcast that I huffduffed a while back.

But the most unexpected place that Hoffman popped up was when I was diving down a SETI (or METI) rabbit hole. There I was reading about the Cosmic Call project and Lincos when I came across this article: Why ‘Arrival’ Is Wrong About the Possibility of Talking with Space Aliens, with its subtitle “Human efforts to communicate with extraterrestrials are doomed to failure, expert says.” The expert in question pulling apart the numbers in the Drake equation turned out to be none other than Donald Hoffman.

A few years ago, at a SETI Institute conference on interstellar communication, Hoffman appeared on the bill after a presentation by radio astronomer Frank Drake, who pioneered the search for alien civilizations in 1960. Drake showed the audience dozens of images that had been launched into space aboard NASA’s Voyager probes in the 1970s. Each picture was carefully chosen to be clearly and easily understood by other intelligent beings, he told the crowd.

After Drake spoke, Hoffman took the stage and “politely explained how every one of the images would be infinitely ambiguous to extraterrestrials,” he recalls.

I’m sure he’s quite right. But let’s face it, the Voyager golden record was never really about communicating with an alien intelligence …it was about how we present ourselves.

Vertical limit

When I was first styling Resilient Web Design, I made heavy use of vh units. The vertical spacing between elements—headings, paragraphs, images—was all proportional to the overall viewport height. It looked great!

Then I tested it on real devices.

Here’s the problem: when a page loads up in a mobile browser—like, say, Chrome on an Android device—the URL bar is at the top of the screen. The height of that piece of the browser interface isn’t taken into account for the viewport height. That makes sense: the viewport height is the amount of screen real estate available for the content. The content doesn’t extend into the URL bar, therefore the height of the URL bar shouldn’t be part of the viewport height.

But then if you start scrolling down, the URL bar scrolls away off the top of the screen. So now it’s behaving as though it is part of the content rather than part of the browser interface. At this point, the value of the viewport height changes: now it’s the previous value plus the height of the URL bar that was previously there but which has now disappeared.

I totally understand why the URL bar is squirrelled away once the user starts scrolling—it frees up some valuable vertical space. But because that necessarily means recalculating the viewport height, it effectively makes the vh unit in CSS very, very limited in scope.

In my initial implementation of Resilient Web Design, the one where I was styling almost everything with vh, the site was unusable. Every time you started scrolling, things would jump around. I had to go back to the drawing board and remove almost all instances of vh from the styles.

I’ve left it in for one use case and I think it’s the most common use of vh: making an element take up exactly the height of the viewport. The front page of the web book uses min-height: 100vh for the title.

Scrolling.

But as soon as you scroll down from there, that element changes height. The content below it suddenly moves.

Let’s say the overall height of the browser window is 600 pixels, of which 50 pixels are taken up by the URL bar. When the page loads, 100vh is 550 pixels. But as soon as you scroll down and the URL bar floats away, the value of 100vh becomes 600 pixels.
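Here’s a minimal sketch of the kind of rule involved (the .title class name here is hypothetical, not the book’s actual markup):

.title {
    /* 550 pixels when the page loads, but 600 pixels once the URL bar scrolls away, */
    /* so this element, and everything below it, jumps as soon as scrolling starts */
    min-height: 100vh;
}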

(This also causes problems if you’re using vertical media queries. If you choose the wrong vertical breakpoint, then the media query won’t kick in when the page loads but will kick in once the user starts scrolling …or vice-versa.)

There’s a mixed message here. On the one hand, the browser is declaring that the URL bar is part of its interface; that the space is off-limits for content. But then, once scrolling starts, that is invalidated. Now the URL bar is behaving as though it is part of the content scrolling off the top of the viewport.

The result of this messiness is that the vh unit is practically useless for real-world situations with real-world devices. It works great for desktop browsers if you’re grabbing the browser window and resizing, but that’s not exactly a common scenario for anyone other than web developers.

I’m sure there’s a way of solving it with JavaScript but that feels like using an atomic bomb to crack a walnut—the whole point of having this in CSS is that we don’t need to use JavaScript for something related to styling.

It’s such a shame. A piece of CSS that’s great in theory, and is really well supported, just falls apart where it matters most.

Update: There’s a two-year old bug report on this for Chrome, and it looks like it might actually get fixed in February.

Twenty sixteen

When I took a look back at 2015, it was to remark on how nicely uneventful it was. I wish I could say the same about 2016. Instead, this was the year that too damned much kept happening.

The big picture was dominated by Brexit and Trump, disasters that are sure to shape events for years to come. I try to keep the even bigger picture in perspective and remind myself that our species is doing well, and that we’re successfully battling poverty, illiteracy, violence, pollution, and disease. But it’s so hard sometimes. I still think the overall trend for this decade will be two steps forward, but the closing half is almost certain to be one step back.

Some people close to me have had a really shitty year. More than anything, I wish I could do more to help them.

Right now I’m thinking that one of the best things I could wish for 2017 is for it to be an uneventful year. I’d really like it if the end-of-year round-up in 365 days time had no world-changing events.

But for me personally? 2016 was fine. I didn’t accomplish any big goals—although I’m very proud to have published Resilient Web Design—but I’ve had fun at work, and as always, I’m very grateful for all the opportunities that came my way.

I ate some delicious food…

Short rib. Seabass with carrot-top pesto on beet greens and carrot purée. Bratwurst. Sausage and sauerkraut. Short ribs. Homemade pappardelle with pig cheek ragu. Barbecued Thai chicken. Daily oyster. Kebab. Chicharrones. Nightfireburger. Ribeye.

I went to beautiful places…

Popped in to see Caravaggio and Holbein. Our home for the week. Bodleian go where no one has gone before. Tram. Amsterdam’s looking lovely this morning. Stockholm street. Mauer. Ah, Venice! Barcelona. Malibu sunset. Cuskinny.

And I got to hang out with some lovely doggies…

Mia! Archie is my favourite @EnhanceConf speaker. Mesa, Lola, and @wordridden. Rainier McChedderton! I met Zero! Yay! Thanks, @wilto. On the bright side, Huxley is in the @Clearleft office today. The day Herbie came to visit @Clearleft. It’s Daphne. Poppy’s on patrol. Morty! Scribble is a good dog. Sleepy.

Have a happy—and uneventful—new year!

The many formats of Resilient Web Design

If you don’t like reading in a web browser, you might like to know that Resilient Web Design is now available in more formats.

Jiminy Panoz created a lovely EPUB version. I tried it out in Apple’s iBooks app and it looks great. I tried to submit it to the iBooks store too, but that process threw up a few too many roadblocks.

Oliver Williams has created a MOBI version. That means you can read it on a Kindle. I plugged my old Kindle into my computer, dragged that file onto its disc image, and it worked a treat.

And there’s always the PDF versions; one in portrait and another in landscape format. Those were generated straight from the print styles.

Oh, and there’s the podcast. I’ve only released two chapters so far. The Christmas break and an untimely cold have slowed down the release schedule a little bit.

I’d love to make a physical, print-on-demand version of Resilient Web Design available—maybe through Lulu—but my InDesign skills are non-existent.

If you think the book should be available in any other formats, and you fancy having a crack at it, please feel free to use the source files.

Deep linking with fragmentions

When I was marking up Resilient Web Design I wanted to make sure that people could link to individual sections within a chapter. So I added IDs to all the headings. There’s no UI to expose that though—like the hover pattern that some sites use to show that something is linkable—so unless you know the IDs are there, there’s no way of getting at them other than “view source.”

But if you’re reading a passage in Resilient Web Design and you highlight some text, you’ll notice that the URL updates to include that text after a hash symbol. If that updated URL gets shared, then anyone following it should be sent straight to that string of text within the page. That’s fragmentions in action:

Fragmentions find the first matching word or phrase in a document and focus its closest surrounding element. The match is determined by the case-sensitive string following the # (or ## double-hash)

It’s a similar idea to Eric and Simon’s proposal to use CSS selectors as fragment identifiers, but using plain text instead. You can find out more about the genesis of fragmentions from Kevin. I’m using Jonathon Neal’s script with some handy updates from Matthew.

I’m using the fragmention support to power the index of the book. It relies on JavaScript to work though, so Matthew has come to the rescue again and created a version of the site with IDs for each item linked from the index (I must get around to merging that).

The fragmention functionality is ticking along nicely, with one problem…

I’ve tweaked the typography of Resilient Web Design to within an inch of its life, including a crude but effective technique to avoid widowed words at the end of a paragraph. The last two words of every paragraph are separated by a UTF-8 no-break space character instead of a regular space.
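I’m not claiming this is exactly how the book does it (the substitution could just as easily happen at build time), but as a client-side sketch it amounts to something like this:

// A sketch of the idea, not the book's actual code:
// swap the last regular space in each paragraph for a no-break space (U+00A0)
// so that the final two words always wrap together
document.querySelectorAll('p').forEach(function (paragraph) {
    // Naive: assumes the paragraph ends in plain text rather than markup
    paragraph.innerHTML = paragraph.innerHTML.replace(/ (\S+)\s*$/, '\u00A0$1');
});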

That solves the widowed words problem, but it confuses the fragmention script. Any selected text that includes the last two words of a paragraph fails to match. I’ve tried tweaking the script, but I’m stumped. If you fancy having a go, please have at it.

Update: And fixed! Thanks to Lee.

Print styles

I really wanted to make sure that the print styles for Resilient Web Design were pretty good—or at least as good as they could be given the everlasting lack of support for many print properties in browsers.

Here’s the first thing I added:

@media print {
  @page {
    margin: 1in 0.5in 0.5in;
    orphans: 4;
    widows: 3;
  }
}

That sets the margins of printed pages in inches (I could’ve used centimetres but the numbers were nice and round in inches). The orphans: 4 declaration says that if fewer than 4 lines of a paragraph would be left at the bottom of a page, shunt the text onto the next page. And widows: 3 declares that there shouldn’t be fewer than 3 lines left alone at the top of a page (instead more lines will be carried over from the previous page).

I always get widows and orphans confused so I remind myself that orphans are left alone at the start; widows are left alone at the end.

I try to make sure that some elements don’t get their content split up over page breaks:

@media print {
  p, li, pre, figure, blockquote {
    page-break-inside: avoid;
  }
}

I don’t want headings appearing at the end of a page with no content after them:

@media print {
  h1,h2,h3,h4,h5 {
    page-break-after: avoid;
  }
}

But sections should always start with a fresh page:

@media print {
  section {
    page-break-before: always;
  }
}

There are a few other little tweaks to hide some content from printing, but that’s pretty much it. Using print preview in browsers showed some pretty decent formatting. In fact, I used the “Save as PDF” option to create the PDF versions of the book. The portrait version comes from Chrome’s preview. The landscape version comes from Firefox, which offers more options under “Layout”.

For some more print style suggestions, have a look at the article I totally forgot about print style sheets. There’s also tips and tricks for print style sheets on Smashing Magazine. That includes a clever little trick for generating QR codes that only appear when a document is printed. I’ve used that technique for some page types over on The Session.

The typography of a web book

I’m a sucker for classic old-style serif typefaces: Caslon, Baskerville, Bembo, Garamond …I love ‘em. That’s probably why I’ve always found the typesetting in Edward Tufte’s books so appealing—he always uses a combination of Bembo for body copy and Gill Sans for headings.

Earlier this year I stumbled on a screen version of Bembo used for Tufte’s digital releases called ET Book. Best of all, it’s open source:

ET Book is a Bembo-like font for the computer designed by Dmitry Krasny, Bonnie Scranton, and Edward Tufte. It is free and open-source.

When I was styling Resilient Web Design, I knew that the choice of typeface would be one of the most important decisions I would make. Remembering that open source ET Book font, I plugged it in to see how it looked. I liked what I saw. I found it particularly appealing when it’s full black on full white at a nice big size (with lower contrast or sizes, it starts to get a bit fuzzy).

I love, love, love the old-style numerals of ET Book. But I was disappointed to see that ligatures didn’t seem to be coming through (even when I had enabled them in CSS). I mentioned this to Rich and of course he couldn’t resist doing a bit of typographic sleuthing. It turns out that the ligature glyphs are there in the source files but the files needed a little tweaking to enable them. Because the files are open source, Rich was able to tweak away to his heart’s content. I then took the tweaked open type files and ran them through Font Squirrel to generate WOFF and WOFF2 files. I’ve put them on Github.
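For the record, the CSS side of that looks something like this sketch (the file paths are placeholders rather than the book’s actual paths):

@font-face {
    font-family: 'ET Book';
    src: url('/fonts/etbook.woff2') format('woff2'),
         url('/fonts/etbook.woff') format('woff');
}
body {
    font-family: 'ET Book', serif;
    /* Ask for the ligatures; this only does anything now that the glyphs are enabled in the font files */
    font-variant-ligatures: common-ligatures;
}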

For this book, I decided that the measure would be the priority. I settled on a measure of around 55 to 60 characters—about 10 or 11 words per line. I used a max-width of 27em combined with Mike’s brilliant fluid type technique to maintain a consistent measure.
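The gist of it, much simplified and with illustrative numbers rather than the book’s actual values (the html and main selectors are assumptions too), is a calc-based font size combined with that fixed measure:

html {
    /* Scale the root font size fluidly, from 16px at a 320px wide viewport up to 32px at 1600px */
    font-size: calc(16px + (32 - 16) * ((100vw - 320px) / (1600 - 320)));
}
main {
    /* Roughly 55 to 60 characters per line; about 10 or 11 words */
    max-width: 27em;
}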

It looks great on small-screen devices and tablets. On large screens, the font size starts to get really, really big. Personally, I like that. Lots of other people like it too. But some people really don’t like it. I should probably add a font-resizing widget for those who find the font size too shocking on luxuriously large screens. In the meantime, their only recourse is to fork the CSS to make their own version of the book with more familiar font sizes.

The visceral reaction a few people have expressed to the font size reminds me of the flak Jeffrey received when he redesigned his personal site a few years back:

Many people who’ve visited this site since the redesign have commented on the big type. It’s hard to miss. After all, words are practically the only feature I haven’t removed. Some of the people say they love it. Others are undecided. Many are still processing. A few say they hate it and suggest I’ve lost my mind.

I wonder how the people who complained then are feeling now, a few years on, in a world with Medium in it? Jeffrey’s redesign doesn’t look so extreme any more.

Resilient Web Design will be on the web for a very, very, very long time. I’m curious to see if its type size will still look shockingly large in years to come.

Introducing Resilient Web Design

I wrote a thing. The thing is a book. But the book is not published on paper. This book is on the web. It’s a web book. Or “wook” if you prefer …please don’t prefer. Here it is:

Resilient Web Design.

It’s yours for free.

Much of the subject matter will be familiar if you’ve seen my conference talks from the past couple of years, particularly Enhance! and Resilience. But the book ended up taking some twists and turns that surprised me. It turned out to be a bit of a history book: the history of design, the history of the web.

Resilient Web Design is a short book. It’s between sixteen and seventeen thousand words long. You could read the whole thing in a couple of hours. Or—because the book has seven chapters—you could take fifteen to twenty minutes out of a day to read one chapter and you’d have the whole thing read in a week.

If you make websites in any capacity, I hope that this book will resonate with you. Even if you don’t make websites, I still hope there’s an interesting story in there for you.

You can read the whole book on the web, but if you’d rather have a single file to carry around, I’ve made some PDFs as well: one in portrait, one in landscape.

I’ve licensed the book quite liberally. It’s released under a Creative Commons attribution share-alike licence. That means you can re-use the material in any way you want (even commercial usage) as long as you provide some attribution and use the same licence. So if you’d like to release the book in some other format like ePub or anything, go for it.

I’m currently making an audio version of Resilient Web Design. I’ll be releasing it one chapter at a time as a podcast. Here’s the RSS feed if you want to subscribe to it. Or you can subscribe directly from iTunes.

I took my sweet time writing this book. I wrote the first chapter in March 2015. I wrote the last chapter in May 2016. Then I sat on it for a while, figuring out what to do with it. Eventually I decided to just put the whole thing up on the web—it seems fitting.

Whereas the writing took over a year of solid procrastination, making the website went surprisingly quickly. After one weekend of marking up and styling, I had most of it ready to go. Then I spent a while tweaking. The source files are on Github.

I’m pretty happy with the end result. I’ll write a bit more about some of the details over the next while—the typography, the offline functionality, print styles, and stuff like that. In the meantime, I hope you’ll peruse this little book at your leisure…

Resilient Web Design.

If you like it, please spread the word.

Between the braces

In a post called Side Effects in CSS that he wrote a while back, Philip Walton talks about different kinds of challenges in writing CSS:

There are two types of problems in CSS: cosmetic problems and architectural problems.

The cosmetic problems are solved by making something look the way you want it to. The architectural problems are trickier because they have more long-term effects—maintainability, modularity, encapsulation; all that tricky stuff. Philip goes on to say:

If I had to choose between hiring an amazing designer who could replicate even the most complicated visual challenges easily in code and someone who understood the nuances of predictable and maintainable CSS, I’d choose the latter in a heartbeat.

This resonates with something I noticed a while back while I was doing some code reviews. Most of the time when I’m analysing CSS and trying to figure out how “good” it is—and I know that’s very subjective—I’m concerned with what’s on the outside of the curly braces.

selector {
    property: value;
}

The stuff inside the curly braces—the properties and values—that’s where the cosmetic problems get solved. It’s also the stuff that you can look up; I certainly don’t try to store all possible CSS properties and values in my head. It’s also easy to evaluate: Does it make the thing look like you want it to look? Yes? Good. It works.

The stuff outside the curly braces—the selectors—that’s harder to judge. It needs to be evaluated with lots of “what ifs”: What if this selects something you didn’t intend to? What if the markup changes? What if someone else writes some CSS that negates this?

I find it fascinating that most of the innovation in CSS from the browser makers and standards bodies arrives in the form of new properties and values—flexbox, grid, shapes, viewport units, and so on. Meanwhile there’s a whole other world of problems to be solved outside the curly braces. There’s not much that the browser makers or standards bodies can do to help us there. I think that’s why most of the really interesting ideas and thoughts around CSS in recent years have focused on that challenge.

Resilience retires

I spoke at the GOTO conference in Berlin this week. It was the final outing of a talk I’ve been giving for about a year now called Resilience.

Looking back over my speaking engagements, I reckon I must have given this talk—in one form or another—about sixteen times. If by some statistical fluke or through skilled avoidance strategies you managed not to see the talk, you can still have it rammed down your throat by reading a transcript of the presentation.

That particular outing is from Beyond Tellerrand earlier this year in Düsseldorf. That’s one of the events that recorded a video of the talk. Here are all the videos of it I could find:

Or, if you prefer, here’s an audio file. And here are the slides but they won’t make much sense by themselves.

Resilience is a mixture of history lesson and design strategy. The history lesson is about the origins of the internet and the World Wide Web. The design strategy is a three-pronged approach:

  1. Identify core functionality.
  2. Make that functionality available using the simplest technology.
  3. Enhance!

And if you like that tweet-sized strategy, you can get it on a poster. Oh, and check this out: Belgian student Sébastian Seghers published a school project on the talk.

Now, you might be thinking that the three-headed strategy sounds an awful lot like progressive enhancement, and you’d be right. I think every talk I’ve ever given has been about progressive enhancement to some degree. But with this presentation I set myself a challenge: to talk about progressive enhancement without ever using the phrase “progressive enhancement”. This is something I wrote about last year—if the term “progressive enhancement” is commonly misunderstood by the very people who would benefit from hearing this message, maybe it’s best to not mention that term and talk about the benefits of progressive enhancement instead: robustness, resilience, and technical credit. I think that little semantic experiment was pretty successful.

While the time has definitely come to retire the presentation, I’m pretty pleased with it, and I feel like it got better with time as I adjusted the material. The most common format for the talk was 40 to 45 minutes long, but there was an extended hour-long “director’s cut” that only appeared at An Event Apart. That included an entire subplot about Arthur C. Clarke and the invention of the telegraph (I’m still pretty pleased with the segue I found to weave those particular threads together).

Anyway, with the Resilience talk behind me, my mind is now occupied with the sequel: Evaluating Technology. I recently shared my research material for this one and, as you may have gathered, it takes me a loooong time to put a presentation like this together (which, by the same token, is one of the reasons why I end up giving the same talk multiple times within a year).

This new talk had its debut at An Event Apart in San Francisco two weeks ago. Jeffrey wrote about it and I’m happy to say he liked it. This bodes well—I’m already booked in for An Event Apart Seattle in April. I’ll also be giving an abridged version of this new talk at next year’s Render conference.

But that’s it for my speaking schedule for now. 2016 is all done and dusted, and 2017 is looking wide open. I hope I’ll get some more opportunities to refine and adjust the Evaluating Technology talk at some more events. If you’re a conference organiser and it sounds like something you’d be interested in, get in touch.

In the meantime, it’s time for me to pack away the Resilience talk, and wheel it down into the archives, just like the closing scene of Raiders Of The Lost Ark. The music swells. The credits roll. The image fades to black.

The road to Indie Web Camp LA

After An Event Apart San Francisco, which was—as always—excellent, it was time for me to get to the next event: Indie Web Camp Los Angeles. But I wasn’t going alone. Tantek was going too, and seeing as he has a car—a convertible, even—what better way to travel from San Francisco to LA than on the Pacific Coast Highway?

It was great—travelling through the land of Steinbeck and Guthrie at the speed of Kerouac and Springsteen. We stopped for the night at Pismo Beach and then continued on, rolling into Santa Monica at sunset.

Half Moon Bay. Roadtripping with @t. Pomponio beach. Windswept. Salinas. Refueling. Driving through the Californian night. Pismo Beach. On the beach. On the beach with @t. Stopping for a coffee in Santa Barbara. Leaving Pismo Beach. Chevron. Santa Barbara steps. On the road. Driving through Malibu. Malibu sunset. Sun worshippers. Sunset in Santa Monica.

The weekend was spent in the usual Indie Web Camp fashion: a day of BarCamp-style discussions, followed by a day of hacking on our personal websites.

I decided to follow on from what I did at the Brighton Indie Web Camp. There, I made a combined tag view—a way of seeing, for example, everything tagged with “indieweb” instead of just journal entries tagged with “indieweb” or links tagged with “indieweb”. I wanted to do the same thing with my archives. I have separate archives for my journal, my links, and my notes. What I wanted was a combined view.

After some hacking, I got it working. So now you can see combined archives by year, month, and day (I managed to add a sparkline to the month view as well):

I did face a bit of a conundrum. Both my home page stream and my tag pages show posts in reverse chronological order, with the newest posts at the top. I’ve decided to replicate that for the archive view, but I’m not sure if that’s the right decision. Maybe the list of years should begin with 2001 and end with 2016, instead of the other way around. And maybe when you’re looking at a month of posts, you should see the first posts in that month at the top.

Anyway, I’ll live with it in reverse chronological order for a while and see how it feels. I’m just glad I managed to get it done—I’ve been meaning to do it for quite a while. Once again, I’m amazed by how much gets accomplished when you’re in the same physical space as other helpful, motivated people all working on improving their indie web presence, little by little.

Greetings from Indie Web Camp LA. Indie Web Camping. Hacking away. Day two of Indie Web Camp LA.

A decade on Twitter

I wrote my first tweet ten years ago.

That’s the tweetiest of tweets, isn’t it? (and just look at the status ID—only five digits!)

Of course, back then we didn’t call them tweets. We didn’t know what to call them. We didn’t know what to make of this thing at all.

I say “we”, but when I signed up, there weren’t that many people on Twitter that I knew. Because of that, I didn’t treat it as a chat or communication tool. It was more like speaking into the void, like blogging is now. The word “microblogging” was one of the terms floating around, grasped by those of us trying to get to grips with what this odd little service was all about.

Twenty days after I started posting to Twitter, I wrote about how more and more people that I knew were joining:

The usage of Twitter is, um, let’s call it… emergent. Whenever I tell anyone about it, their first question is “what’s it for?”

Fair question. But there isn’t really an answer. You send messages either from the website, your mobile phone, or chat. What you post and why you’d want to do it is entirely up to you.

I was quite the cheerleader for Twitter:

Overall, Twitter is full of trivial little messages that sometimes merge into a coherent conversation before disintegrating again. I like it. Instant messaging is too intrusive. Email takes too much effort. Twittering feels just right for the little things: where I am, what I’m doing, what I’m thinking.

“Twittering.” Don’t laugh. “Tweeting” sounded really silly at first too.

Now at this point, I could start reminiscing about how much better things were back then. I won’t, but it’s interesting to note just how different it was.

  • The user base was small enough that there was a public timeline of all activity.
  • The characters in your username counted towards your 140 characters. That’s why Tantek changed his handle to be simply “t”. I tried it for a day. I think I changed my handle to “jk”. But it was too confusing so I changed it back.
  • We weren’t always sure how to write our updates either—your username would appear at the start of the message, so lots of us wrote our updates in the third person present (Brian still does). I’m partial to using the present continuous. That was how I wrote my reaction to Chris’s weird idea for tagging updates.

I think about that whenever I see a hashtag on a billboard or a poster or a TV screen …which is pretty much every day.

At some point, Twitter updated their onboarding process to include suggestions of people to follow, subdivided into different categories. I ended up in the list of designers to follow. Anil Dash wrote about the results of being listed and it reflects my experience too. I got a lot of followers—it’s up to around 160,000 now—but I’m pretty sure most of them are bots.

There have been a lot of changes to Twitter over the years. In the early days, those changes were driven by how people used the service. That’s where the @-reply convention (and hashtags) came from.

Then something changed. The most obvious sign of change was the way that Twitter started treating third-party developers. Where they previously used to encourage and even promote third-party apps, the company began to crack down on anything that didn’t originate from Twitter itself. That change reflected the results of an internal struggle between the people at Twitter who wanted it to become an open protocol (like email), and those who wanted it to become a media company (like Yahoo). The media camp won.

Of course Twitter couldn’t possibly stay the same given its incredible growth (and I really mean incredible—when it started to appear in the mainstream, in films and on TV, it felt so weird: this funny little service that nerds were using was getting popular with everyone). Change isn’t necessarily bad, it’s just different. Your favourite band changed when they got bigger. South by Southwest changed when it got bigger—it’s not worse now, it’s just very different.

Frank described the changing nature of Twitter perfectly in his post From the Porch to the Street:

Christopher Alexander made a great diagram, a spectrum of privacy: street to sidewalk to porch to living room to bedroom. I think for many of us Twitter started as the porch—our space, our friends, with the occasional neighborhood passer-by. As the service grew and we gained followers, we slid across the spectrum of privacy into the street.

I stopped posting directly to Twitter in May, 2014. Instead I now write posts on my site and then send a copy to Twitter. And thanks to the brilliant Brid.gy, I get replies, favourites and retweets sent back to my own site—all thanks to Webmention, which just became a W3C proposed recommendation.

It’s hard to put into words how good this feels. There’s a psychological comfort blanket that comes with owning your own data. I see my friends getting frustrated and angry as they put up with an increasingly alienating experience on Twitter, and I wish I could explain how much better it feels to treat Twitter as nothing more than a syndication service.

When Twitter rolls out changes these days, they certainly don’t feel like they’re driven by user behaviour. Quite the opposite. I’m currently in the bucket of users being treated to new @-reply behaviour. Tressie McMillan Cottom has written about just how terrible the new changes are. You don’t get to see any usernames when you’re writing a reply, so you don’t know exactly how many people are going to be included. And if you mention a URL, the username associated with that website may get added to the tweet. The end result is that you write something, you publish it, and then you think “that’s not what I wrote.” It feels wrong. It robs you of agency. Twitter have made lots of changes over the years, but this feels like the first time that they’re going to actively edit what you write, without your permission.

Maybe this is the final straw. Maybe this is the change that will result in long-time Twitter users abandoning the service. Maybe.

Me? Well, Twitter could disappear tomorrow and I wouldn’t mind that much. I’d miss seeing updates from friends who don’t have their own websites, but I’d carry on posting my short notes here on adactio.com. When I started posting to Twitter ten years ago, I was speaking (or microblogging) into the void. I’m still doing that ten years on, but under my terms. It feels good.

I’m not sure if my Twitter account will still exist ten years from now. But I’m pretty certain that my website will still be around.

And now, if you don’t mind…

I’m off to grab some lunch.

Fifteen

My site has been behaving strangely recently. It was nothing that I could put my finger on—it just seemed to be acting oddly. When I checked to see if everything was okay, I was told that everything was fine, but still, I sensed something that was amiss.

I’ve just realised what it was. Last week on the 30th of September, I didn’t do or say anything special. That was the problem. I had forgotten my blog’s anniversary.

I’m so sorry, adactio.com! Honestly, I had been thinking about it for all of September but then on the day, one thing led to another, I was busy, and it just completely slipped my mind.

So this is a bit late, but anyway …happy fifteenth anniversary to this journal!

We’ve been through a lot together in those fifteen years, haven’t we, /journal? Oh, the places we’ve been and the things we’ve seen!

I remember where we were on our tenth anniversary: Bologna. Remember we were there for the first edition of the From The Front conference? Now, five years on, we’ve just been to the final edition of that same event—a bittersweet occasion.

Like I said five years ago:

It has been a very rewarding, often cathartic experience so far. I know that blogging has become somewhat passé in this age of Twitter and Facebook but I plan to keep on keeping on right here in my own little corner of the web.

I should plan something special for September 30th, 2021 …just to make sure I don’t forget.

Someday

In the latest issue of Justin’s excellent Responsive Web Design weekly newsletter, he includes a segment called “The Snippet Show”:

This is what tells all our browsers on all our devices to set the viewport to be the same width of the current device, and to also set the initial scale to 1 (not scaled at all). This essentially allows us to have responsive design consistently.

<meta name="viewport" content="width=device-width, initial-scale=1">

The viewport value for the meta element was invented by Apple when the iPhone was released. Back then, it was a safe bet that most websites were wider than the iPhone’s 320 pixel wide display—most of them were 960 pixels wide …because reasons. So mobile Safari would automatically shrink those sites down to fit within the display. If you wanted to over-ride that behaviour, you had to use the meta viewport gubbins that they made up.

That was nine years ago. These days, if you’re building a responsive website, you still need to include that meta element.

That seems like a shame to me. I’m not suggesting that the default behaviour should switch to assuming a fluid layout, but maybe the browser could just figure it out. After all, the CSS will already be parsed by the time the HTML is rendering. Perhaps a quick test for the presence of a crawlbar could be used to trigger the shrinking behaviour. No crawlbar, no shrinking.

Maybe someday the assumption behind the current behaviour could be flipped—assume a website is responsive unless the author explicitly requests the shrinking behaviour. I’d like to think that could happen soon, but I suspect that a depressingly large number of sites are still fixed-width (I don’t even want to know—don’t tell me).

There are other browser default behaviours that might someday change. Right now, if I type example.com into a browser, it will first attempt to contact http://example.com rather than https://example.com. That means the example.com server has to do a redirect, costing the user valuable time.

You can mitigate this by putting your site on the HSTS preload list but wouldn’t it be nice if browsers first checked for HTTPS instead of HTTP? I don’t think that will happen anytime soon, but someday …someday.

Indie Web Camp Brighton 2016

Indie Web Camp Brighton 2016 is done and dusted. It’s hard to believe that it’s already in its fifth(!) year. As with previous years, it was a lot of fun.

IndieWebCampBrighton2016

The first day—the discussions day—covered a lot of topics. I led a session on service workers, where we brainstormed offline and caching strategies for personal websites.

There was a design session looking at alternatives to simply presenting everything in a stream. Some great ideas came out of that. And there was a session all about bookmarking and linking. That one really got my brain whirring with ideas for the second day—the making/coding day.

I’ve learned from previous Indie Web Camps that a good strategy for the second day is to have two tasks to tackle: one that’s really easy (so you’ve at least got that to demo at the end), and one that’s more ambitious. This time, I put together a list of potential goals, and then ordered them by difficulty. By the end of the day, I managed to get a few of them done.

First off, I added a small bit of code to my bookmarking flow, so that any time I link to something, I send a ping to the Internet Archive to grab a copy of that URL. So here’s a link I bookmarked to one of Remy’s blog posts, and here it is in the Wayback Machine—see how the date of storage matches the date of my link.

The code to do that was pretty straightforward. I needed to hit this endpoint:

http://web.archive.org/save/{url}
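Something along these lines would do it (a sketch rather than the actual code, and the kind of request that’s probably better made server-side where CORS isn’t an issue):

// A sketch, not the actual code: a fire-and-forget request to the Wayback Machine's save endpoint
function pingInternetArchive(url) {
    return fetch('http://web.archive.org/save/' + url)
        .catch(function (error) {
            console.log('Archive ping failed:', error);
        });
}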

I also updated my bookmarklet for posting links so that, if I’ve highlighted any text on the page I’m linking to, that text is automatically pasted in to the description.

I tweaked my webmentions a bit so that if I receive a webmention that has a type of bookmark-of, that is displayed differently to a comment, or a like, or a share. Here’s an example of Aaron bookmarking one of my articles.

The more ambitious plan was to create an over-arching /tags area for my site. I already have tag-based navigation for my journal and my links:

But until this weekend, I didn’t have the combined view:

I didn’t get around to adding pagination. That’s something I should definitely add, because some of those pages get veeeeery long. But I did spend some time adding sparklines. They can be quite revealing, especially on topics that were hot ten years ago, but have faded over time, or topics that have become more and more popular with each year.

All in all, a very productive weekend.

European tour

I’m recovering from an illness that laid me low a few weeks back. I had a nasty bout of man-flu which then led to a chest infection for added coughing action. I’m much better now, but alas, this illness meant I had to cancel my trip to Chicago for An Event Apart. I felt very bad about that. Not only was I reneging on a commitment, but I also missed out on an opportunity to revisit a beautiful city. But it was for the best. If I had gone, I would have spent nine hours in an airborne metal tube breathing recycled air, and then stayed in a hotel room with that special kind of air conditioning that hotels have that always seems to give me the sniffles.

Anyway, no point regretting a trip that didn’t happen—time to look forward to my next trip. I’m about to embark on a little mini tour of some lovely European cities:

  • Tomorrow I travel to Stockholm for Nordic.js. I’ve never been to Stockholm. In fact I’ve only stepped foot in Sweden on a day trip to Malmö to hang out with Emil. I’m looking forward to exploring all that Stockholm has to offer.
  • On Saturday I’ll go straight from Stockholm to Berlin for the View Source event organised by Mozilla. Looks like I’ll be staying in the east, which isn’t a part of the city I’m familiar with. Should be fun.
  • Alas, I’ll have to miss out on the final day of View Source, but with good reason. I’ll be heading from Berlin to Bologna for the excellent From The Front conference. Ah, I remember being at the very first one five years ago! I’ve made it back every second year since—I don’t need much of an excuse to go to Bologna, one of my favourite places …mostly because of the food.

The only downside to leaving town for this whirlwind tour is that there won’t be a Brighton Homebrew Website Club tomorrow. I feel bad about that—I had to cancel the one two weeks ago because I was too sick for it.

But on the plus side, when I get back, it won’t be long until Indie Web Camp Brighton on Saturday, September 24th and Sunday, September 25th. If you haven’t been to an Indie Web Camp before, you should really come along—it’s for anyone who has their own website, or wants to have their own website. If you have been to an Indie Web Camp before, you don’t need me to convince you to come along; you already know how good it is.

Sign up for Indie Web Camp Brighton here. It’s free and it’s a lot of fun.

The importance of owning your data is getting more awareness. To grow it and help people get started, we’re meeting for a bar-camp like collaboration in Brighton for two days of brainstorming, working, teaching, and helping.

Save the dates for Indie Web Camp Brighton 2016

September 24th and 25th—those are the dates you should put in your diary. That’s when this year’s Indie Web Camp Brighton is happening.

Once again it’ll be at 68 Middle Street, home to Clearleft. You can register for free now, and then add your name to the list of participants on the wiki.

If you haven’t been to an Indie Web Camp before, it’s a very straightforward proposition. The idea is that you should have your own website. That’s it. Everything else is predicated on that. So while there’ll be plenty of discussions, demos, and designs, they’re all in service to that fundamental premise.

The first day of an Indie Web Camp is like a BarCamp. We make a schedule grid at the start of the day and people organise topics by room and time slot. It sounds chaotic. It is chaotic. But it works surprisingly well. The discussions can be about technologies, or interfaces, or ideas, or just about anything really.

The second day is for making. After the discussions from the previous day, most people will have a clear idea at this point for something they might want to do. It might involve adding some new technology to their website, or making some design changes, or helping build a tool. For people starting from scratch, this is the perfect time for them to build and launch a basic website.

At the end of the second day, everyone demos what they’ve done. I’m always amazed by how much people can accomplish in just one weekend. There’s something about having other people around to help you that makes it super productive.

You might be thinking “but I’m not a coder!” Don’t worry—there’ll be plenty of coders there so you can get their help on whatever you might decide to do. If you’re a designer, your skills will be in high demand by those coders. It’s that mish-mash of people that makes it such a fun gathering.

Last year’s Indie Web Camp Brighton was lots of fun. Let’s make Indie Web Camp Brighton 2016 even better!

Indie Web Camp Brighton group photo

Sticky headers

I made a little tweak to The Session today. The navigation bar across the top is “sticky” now—it doesn’t scroll with the rest of the content.

I made sure that the stickiness only kicks in if the screen is both wide and tall enough to warrant it. Vertical media queries are your friend!
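In practice that means something along these lines (a sketch with made-up breakpoint values and a hypothetical .site-header class, not The Session’s actual CSS):

@media all and (min-width: 40em) and (min-height: 35em) {
    .site-header {
        /* Only stick the navigation when there's both width and height to spare */
        position: fixed;
        top: 0;
        width: 100%;
    }
}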

But it’s not enough to just put some position: fixed CSS inside a media query. There are some knock-on effects that I needed to mitigate.

I use the space bar to paginate through long pages. It drives me nuts when sites with sticky headers don’t accommodate this. I made use of Tim Murtaugh’s sticky pagination fixer. It makes sure that page-jumping with the keyboard (using the space bar or page down) still works. I remember when I linked to this script two years ago, thinking “I bet this will come in handy one day.” Past me was right!

The other “gotcha!” with having a sticky header is making sure that in-page anchors still work. Nicolas Gallagher covers the options for this in a post called Jump links and viewport positioning. Here’s the CSS I ended up using:

:target:before {
    content: '';
    display: block;
    height: 3em;
    margin: -3em 0 0;
}

I also needed to check any of my existing JavaScript to see if I was using scrollTo anywhere, and adjust the calculations to account for the newly-sticky header.
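Wherever a scroll position gets calculated, the height of the sticky header now needs to be subtracted, roughly like this (a sketch; the .site-header selector is hypothetical):

// A sketch: knock the sticky header's height off any calculated scroll position
function scrollToElement(element) {
    var headerHeight = document.querySelector('.site-header').offsetHeight;
    var top = element.getBoundingClientRect().top + window.pageYOffset;
    window.scrollTo(0, top - headerHeight);
}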

Anyway, just a few things to consider if you’re going to make a navigational element “sticky”:

  1. Use min-height in your media query,
  2. Take care of keyboard-initiated page scrolling,
  3. Adjust the positioning of in-page links.

A little progress

I’ve got a fairly simple posting interface for my notes. A small textarea, an optional file upload, some checkboxes for syndicating to Twitter and Flickr, and a submit button.

Notes posting interface

It works fine although sometimes the experience of uploading a file isn’t great, especially if I’m on a slow connection out and about. I’ve been meaning to add some kind of Ajax-y progress type thingy for the file upload, but never quite got around to it. To be honest, I thought it would be a pain.

But then, in his excellent State Of The Gap hit parade of web technologies, Remy included a simple file upload demo. Turns out that all the goodies that have been added to XMLHttpRequest have made this kind of thing pretty easy (and I’m guessing it’ll be easier still once we have fetch).
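The core of the approach is the progress event on the XMLHttpRequest upload object. Here’s a stripped-down sketch (not Remy’s demo verbatim), assuming there’s a progress element in the page to update:

// A sketch of the core idea: POST the form with XMLHttpRequest
// and update a <progress> element as the upload proceeds
var form = document.querySelector('form');
var progressBar = document.querySelector('progress');
form.addEventListener('submit', function (event) {
    event.preventDefault();
    var request = new XMLHttpRequest();
    request.open(form.method, form.action);
    request.upload.addEventListener('progress', function (progressEvent) {
        if (progressEvent.lengthComputable) {
            progressBar.max = progressEvent.total;
            progressBar.value = progressEvent.loaded;
        }
    });
    request.addEventListener('load', function () {
        // Keep it simple: reload once the upload has finished
        window.location.reload();
    });
    request.send(new FormData(form));
});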

I’ve made a little script that adds a progress bar to any forms that are POSTing data.

Feel free to use it, adapt it, and improve it. It isn’t using any ES6iness so there are some obvious candidates for improvement there.

It’s working a treat on my little posting interface. Now I can stare at a slowly-growing progress bar when I’m out and about on a slow connection.