
Thursday, January 19th, 2017

Looking beyond launch

It’s all go, go, go at Clearleft while we’re working on a new version of our website …accompanied by a brand new identity. It’s an exciting time in the studio, tinged with the slight stress that comes with any kind of unveiling like this.

I think it’s good to remember that this is the web. I keep telling myself that we’re not unveiling something carved in stone. Even after the launch we can keep making the site better. In fact, if we wait until everything is perfect before we launch, we’ll probably never launch at all.

On the other hand, you only get one chance to make a first impression, right? So it’s got to be good …but it doesn’t have to be done. A website is never done.

I’ve got to get comfortable with that. There are lots of things that I’d like to be done in time for launch, but realistically it’s fine if those things are completed in the subsequent days or weeks.

Adding a service worker and making a nice offline experience? I really want to do that …but it can wait.

What about other performance tweaks? Yes, we’ll try to have every asset—images, fonts—optimised …but maybe not from day one.

Making sure that each page has good metadata—Open Graph? Twitter Cards? Microformats? Maybe even AMP? Sure …but not just yet.

Having gorgeous animations? Again, I really want to have them but as Val rightly points out, animations are an enhancement—a really, really great enhancement.

If anything, putting the site live before doing all these things acts as an incentive to make sure they get done.

So when you see the new site, if you view source or run it through WebPageTest and spot areas for improvement, rest assured we’re on it.

Understanding the Critical Rendering Path

A nice and clear description of how browsers parse and render web pages.

Let them paste passwords - NCSC Site

Ever been on one of those websites that doesn’t allow you to paste into the password field? Frustrating, isn’t it? (Especially if you use a password manager.)

It turns out that nobody knows how this ever started. It’s like a cargo cult without any cargo.

Wednesday, January 11th, 2017

Making Resilient Web Design work offline

I’ve written before about taking an online book offline, documenting the process behind the web version of HTML5 For Web Designers. A book is quite a static thing, so it’s safe to take a fairly aggressive offline-first approach. In fact, a static, unchanging book is one of the few situations that AppCache works well for. Of course a service worker is better, but until AppCache is removed from browsers (and until service workers are supported across the board), I’m using both. I wouldn’t recommend that for most sites though—for most sites, use a service worker as an enhancement, and avoid AppCache like the plague.

For Resilient Web Design, I took a similar approach to HTML5 For Web Designers, but I knew there was a good chance that some of the content would be tweaked, at least for a while. So while the approach is still cache-first, I decided to keep the cache fairly fresh.

Here’s my service worker. It starts with the usual stuff: when the service worker is installed, there’s a list of static assets to cache. In this case, that list is literally everything; all the HTML, CSS, JavaScript, and images for the whole site. Again, this is a pattern that works well for a book, but wouldn’t be right for other kinds of websites.
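
For illustration, the install step might look something like this sketch (the cache name and the file list here are placeholders, not the site’s actual assets):

const staticCacheName = 'static-assets-v1'; // placeholder name

addEventListener('install', event => {
  event.waitUntil(
    // Open the cache and store every static asset in it
    caches.open(staticCacheName)
    .then( cache => cache.addAll([
        '/',
        '/css/styles.css',
        '/js/scripts.js',
        '/images/cover.png'
        // …and so on, for every page, stylesheet, script, and image
    ]))
  );
});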

The real heavy lifting happens with the fetch event. This is where the logic sits for what the service worker should do every time there’s a request for a resource. I’ve documented the logic with comments:

// Look in the cache first, fall back to the network
  // CACHE
  // Did we find the file in the cache?
      // If so, fetch a fresh copy from the network in the background
      // NETWORK
          // Stash the fresh copy in the cache
  // NETWORK
  // If the file wasn't in the cache, make a network request
      // Stash a fresh copy in the cache in the background
  // OFFLINE
  // If the request is for an image, show an offline placeholder
  // If the request is for a page, show an offline message

So my order of preference is:

  1. Try the cache first,
  2. Try the network second,
  3. Fall back to a placeholder as a last resort.

Leaving aside that third part, regardless of whether the response is served straight from the cache or from the network, the cache gets a top-up. If the response is being served from the cache, there’s an additional network request made to get a fresh copy of the resource that was just served. This means that the user might be seeing a slightly stale version of a file, but they’ll get the fresher version next time round.

Again, I think this is acceptable for a book where the tweaks and changes should be fairly minor, but I definitely wouldn’t want to do it on a more dynamic site where freshness matters more.

Here’s what it usually looks like when a file is served up from the cache:

caches.match(request)
  .then( responseFromCache => {
    // Did we find the file in the cache?
    if (responseFromCache) {
        return responseFromCache;
    }
    // …(the network fallback comes later)
  });

I’ve introduced an extra step where the fresher version is fetched from the network. This is where the code can look a bit confusing: the network request is happening in the background after the cached file has already been returned, but the code appears before the return statement:

caches.match(request)
  .then( responseFromCache => {
    // Did we find the file in the cache?
    if (responseFromCache) {
        // If so, fetch a fresh copy from the network in the background
        event.waitUntil(
            // NETWORK
            fetch(request)
            .then( responseFromFetch => {
                // Stash the fresh copy in the cache
                caches.open(staticCacheName)
                .then( cache => {
                    cache.put(request, responseFromFetch);
                });
            })
        );
        return responseFromCache;
    }
    // …(the network fallback comes later)
  });

It’s asynchronous, see? So even though all that network code appears before the return statement, it’s pretty much guaranteed to complete after the cache response has been returned. You can verify this by putting in some console.log statements:

caches.match(request)
  .then( responseFromCache => {
    if (responseFromCache) {
        event.waitUntil(
            fetch(request)
            .then( responseFromFetch => {
                console.log('Got a response from the network.');
                caches.open(staticCacheName)
                .then( cache => {
                    cache.put(request, responseFromFetch);
                });
            })
        );
        console.log('Got a response from the cache.');
        return responseFromCache;
    }
  });

Those log statements will appear in this order:

Got a response from the cache.
Got a response from the network.

That’s the opposite order in which they appear in the code. Everything inside the event.waitUntil part is asynchronous.

Here’s the catch: this kind of asynchronous waitUntil hasn’t landed in all browsers yet, so as it stands the code I’ve written will fail in browsers that don’t support it.

But never fear! Jake has written a polyfill. All I need to do is include that at the start of my serviceworker.js file and I’m good to go:

// Import Jake's polyfill for async waitUntil
importScripts('/js/async-waituntil.js');

I’m also using this asynchronous waitUntil when a file isn’t found in the cache and is returned from the network instead. Here’s what the usual network code looks like:

fetch(request)
  .then( responseFromFetch => {
    return responseFromFetch;
  })

I want to also store that response in the cache, but I want to do it asynchronously—I don’t care how long it takes to put the file in the cache as long as the user gets the response straight away.

Technically, I’m not putting the response in the cache; I’m putting a copy of the response in the cache (it’s a stream, so I need to clone it if I want to do more than one thing with it).

fetch(request)
  .then( responseFromFetch => {
    // Stash a fresh copy in the cache in the background
    let responseCopy = responseFromFetch.clone();
    event.waitUntil(
      caches.open(staticCacheName)
      .then( cache => {
          cache.put(request, responseCopy);
      })
    );
    return responseFromFetch;
  })
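
Putting those pieces together, the overall shape of the fetch handler is something like this condensed sketch (not the actual file; the offline placeholder and offline message fallbacks from the comment outline above are elided):

addEventListener('fetch', event => {
  const request = event.request;
  event.respondWith(
    // CACHE: look in the cache first
    caches.match(request)
    .then( responseFromCache => {
      if (responseFromCache) {
        // Fetch a fresh copy from the network in the background…
        event.waitUntil(
          fetch(request)
          .then( responseFromFetch => caches.open(staticCacheName)
            .then( cache => cache.put(request, responseFromFetch) )
          )
        );
        // …but respond with the cached copy straight away
        return responseFromCache;
      }
      // NETWORK: the file wasn't in the cache, so make a network request…
      return fetch(request)
      .then( responseFromFetch => {
        // …and stash a copy in the cache in the background
        const responseCopy = responseFromFetch.clone();
        event.waitUntil(
          caches.open(staticCacheName)
          .then( cache => cache.put(request, responseCopy) )
        );
        return responseFromFetch;
      });
    })
  );
});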

That all seems to be working well in browsers that support service workers. For legacy browsers, like Mobile Safari, there’s the much blunter caveman logic of an AppCache manifest.

Here’s the JavaScript that decides whether a browser gets the service worker or the AppCache:

if ('serviceWorker' in navigator) {
  // If service workers are supported
  navigator.serviceWorker.register('/serviceworker.js');
} else if ('applicationCache' in window) {
  // Otherwise inject an iframe to use appcache
  var iframe = document.createElement('iframe');
  iframe.setAttribute('src', '/appcache.html');
  iframe.setAttribute('style', 'width: 0; height: 0; border: 0');
  document.querySelector('footer').appendChild(iframe);
}

Either way, people are making full use of the offline nature of the book and that makes me very happy indeed.

ryanmcdermott/clean-code-javascript: Clean Code concepts adapted for JavaScript

This looks like a sensible approach to writing clean JavaScript.

Tuesday, January 10th, 2017

Mentorship, From the Notebook of Aaron Gustafson

Here’s a great opportunity for somebody looking to level up in web development—mentorship from the one and only Aaron Gustafson.

Thursday, January 5th, 2017

What AMP (Maybe) Means for News Developers - Features - Source: An OpenNews project

So if AMP is useful it’s because it raises the stakes. If we (news developers) don’t figure out faster ways to load our pages for readers, then we’re going to lose a lot of magic.

A number of developers answered questions on the potential effects of Google’s AMP project. This answer resonates a lot with my own feelings:

AMP is basically web performance best practices dressed up as a file format. That’s a very clever solution to what is, at heart, a cultural problem: when management (in one form or another) comes to the CMS team at a news organization and asks to add more junk to the site, saying “we can’t do that because AMP” is a much more powerful argument than trying to explain why a pop-over “Like us on Facebook!” modal is driving our readers to drink.

But the danger is that AMP turns into a long-term “solution” instead of a stop-gap:

So in a sense, the best possible outcome is that AMP is disruptive enough to shake the boardroom into understanding the importance of performance in platform decisions (and making the hard business decisions this demands), but that developers are allowed to implement those decisions in standard HTML instead of adding yet another delivery format to their export pipeline.

The ideal situation looks a lot more like Tim’s proposal:

I would be much more pleased with AMP if it was a spec for Google’s best-practice recommendations rather than effectively a new non-standard format. By using standard HTML/CSS/JS as the building blocks, they’re starting on the right foot, but the reliance on a Google-decreed AMP JavaScript library, use of separate AMP-specific URLs, and encouragement to use a Google-provided CDN are all worrying aspects.

Tuesday, January 3rd, 2017

Does Google execute JavaScript? | Stephan Boyer

Google may or may not decide to run your JavaScript, and you don’t want your business to depend on its particular inclination of the day. Do server-side/universal/isomorphic rendering just to be safe.

Wednesday, December 28th, 2016

What you need to know about using VPN in the UK – By Andy Parker

If you’re prepping your defences against the snooper’s charter (and you/I should be), Andy recommends using NordVPN.

Saturday, December 24th, 2016

10 things I learned making the fastest site in the world

Behind the amusing banter there’s some really solid performance advice in here. Good stuff.

Client Side Rendering (CSR), or as I call it “setting money on fire and throwing it in a river” has its uses, but for this site would have been madness.

Monday, December 19th, 2016

BAgel

The styleguide, design principles, and pattern library for British Airways. It’s the “global experience language” for BA …so it’s called BAgel.

Deep linking with fragmentions

When I was marking up Resilient Web Design I wanted to make sure that people could link to individual sections within a chapter. So I added IDs to all the headings. There’s no UI to expose that though—like the hover pattern that some sites use to show that something is linkable—so unless you know the IDs are there, there’s no way of getting at them other than “view source.”

But if you’re reading a passage in Resilient Web Design and you highlight some text, you’ll notice that the URL updates to include that text after a hash symbol. If that updated URL gets shared, then anyone following it should be sent straight to that string of text within the page. That’s fragmentions in action:

Fragmentions find the first matching word or phrase in a document and focus its closest surrounding element. The match is determined by the case-sensitive string following the # (or ## double-hash)

It’s a similar idea to Eric and Simon’s proposal to use CSS selectors as fragment identifiers, but using plain text instead. You can find out more about the genesis of fragmentions from Kevin. I’m using Jonathon Neal’s script with some handy updates from Matthew.
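
To give a flavour of how that might work under the hood, here’s a much-simplified toy version of the idea (this is not Jonathon’s actual script; treat it as a sketch):

// Toy version of the fragmention idea (not Jonathon Neal's script):
// take the text after the # and bring the first element containing it into view.
function goToFragmention() {
    const text = decodeURIComponent(window.location.hash.replace(/^#+/, ''));
    if (!text) return;
    const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
    let node;
    while ((node = walker.nextNode())) {
        if (node.textContent.includes(text)) {
            // Scroll the closest surrounding element into view
            node.parentElement.scrollIntoView();
            return;
        }
    }
}
window.addEventListener('hashchange', goToFragmention);
goToFragmention();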

I’m using the fragmention support to power the index of the book. It relies on JavaScript to work though, so Matthew has come to the rescue again and created a version of the site with IDs for each item linked from the index (I must get around to merging that).

The fragmention functionality is ticking along nicely, with one problem…

I’ve tweaked the typography of Resilient Web Design to within an inch of its life, including a crude but effective technique to avoid widowed words at the end of a paragraph. The last two words of every paragraph are separated by a UTF-8 no-break space character instead of a regular space.
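
As a sketch of that technique (the book bakes the no-break spaces into its markup; this function just illustrates the idea):

// Join the last two words of a string with a no-break space (U+00A0)
// so they always wrap together. Illustrative only.
function avoidWidow(text) {
    return text.replace(/ (?=\S+$)/, '\u00A0');
}

avoidWidow('A website is never done.');
// returns 'A website is never\u00A0done.'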

That solves the widowed words problem, but it confuses the fragmention script. Any selected text that includes the last two words of a paragraph fails to match. I’ve tried tweaking the script, but I’m stumped. If you fancy having a go, please have at it.

Update: And fixed! Thanks to Lee.

Thursday, December 15th, 2016

Design principles

Andrew Travers wrote about designing design principles at Co-op Digital. I’m somewhat obsessed with design principles—hence my collection—so I’m also obsessed with figuring out what makes for “good” design principles.

One of my favourite design principles (yes, I have favourites) is from the HTML Design Principles. It’s the priority of constituencies:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

The emphasis is my own. It demonstrates how the design principle can be put to use (“in case of conflict”). Andrew also describes uses for the design principles they’re putting together:

What we’re building towards is a set of principles, few enough to be memorable, short enough to be repeatable, relevant enough to be usable. When we’re running a design crit, it’s these principles that we want to lean on. When a sole designer in an agile delivery team is talking about a design approach, it’s these principles that back her up.

Those sound like good use cases to me. Those are situations when design principles can help people reach agreement on priorities, without it having to be about ego or who shouts loudest.

I think it was from Cennydd that I heard about a really good test of a design principle: is it reversible? In other words, could you imagine the exact opposite of the design principle being perfectly valid in a different organisation or on a different project? If not, then the principle may be too weak to be effective. (Cennydd points out that he heard this from Jared, who has written a lot about evaluating design principles.)

“Make it easy to use” would be an example of a weak design principle. It’s hard to imagine a situation where “make it hard to use” would be a reasonable guiding principle.

Frankly, there are plenty of “bad” examples in my collection of design principles. Many of them wouldn’t pass the reversibility test. Just recently though, I spotted some that would pass the test with flying colours. They weren’t even labelled as design principles—they’re the tips that Heydon includes at the end of his excellent 24 Ways article on inclusive design:

  • Involve code early
  • Respect conventions
  • Don’t be exact
  • Enforce simplicity

I could easily imagine endeavours where the complete opposite of those tips would be valued. Personally, I think they’re really great design principles.

I should add them to the list.

Wednesday, December 14th, 2016

Progressive enhancement and team memberships

A really nice pattern, similar to one I wrote about a little while back. There’s also this little gem of an observation:

Progressive enhancement is also well-suited to Agile, as you can start with the core functionality and then iterate.

Thursday, December 8th, 2016

Server Side React

Remy wants to be able to apply progressive enhancement to React: server-side and client-side rendering, sharing the same codebase. He succeeded, but…

In my opinion, an individual or a team starting out, or without the will, aren’t going to use progressive enhancement in their approach to applying server side rendering to a React app. I don’t believe this is by choice, I think it’s simply because React lends itself so strongly to client-side, that I can see how it’s easy to oversee how you could start on the server and progressive enhance upwards to a rich client side experience.

I’m hopeful that future iterations of React will make this a smoother option.

What the Heck Is Inclusive Design? ◆ 24 ways

I really, really like Heydon’s framing of inclusive design: yes, it covers accessibility, but it’s more than that, and it’s subtly different to universal design.

He also includes some tips which read like design principles to me:

  • Involve code early
  • Respect conventions
  • Don’t be exact
  • Enforce simplicity

Come to think of it, they’re really good design principles in that they are all reversible, i.e. you could imagine the polar opposites being design principles elsewhere.

Wednesday, December 7th, 2016

Is JavaScript more fragile? – Baldur Bjarnason

Progressive enhancement’s core value proposition, for me, is that HTML and CSS have features that are powerful in their own right. Using HTML, CSS, and JavaScript together makes for more reliable products than just using Javascript alone in a single-page-app.

This philosophy doesn’t apply to every website out there, but it sure as hell applies to a lot of them.

Monday, December 5th, 2016

KUTE.js | Javascript Animation Engine

This looks like an interesting little JavaScript library for scripting animations.

Sunday, December 4th, 2016

Service Worker, what are you? - Mariko Kosaka

This is a fun—and accurate—explanation of service workers.

There’s definitely something “alien” about a service worker—it’s kind of like a virus that gets installed on the user’s device. I’ve taken to describing it as “a man-in-the-middle attack on your own website”, which makes it sound a bit scarier than is necessary.

Saturday, November 26th, 2016

kottke.org memberships

I have so much admiration for Jason Kottke’s dedication (or sheer bloodymindedness)—he’s been diligently writing and sharing weird and wonderful stuff on his own website for so long. I’m more than happy to support him in that.