Tags: script

Thursday, December 6th, 2018

Big ol’ Ball o’ JavaScript | Brad Frost

Backend logic? JavaScript. Styles? We do that in JavaScript now. Markup? JavaScript. Anything else? JavaScript.

Historically, different languages suggested different roles. “This language does style.” “This language does structure.” But now it’s “This JavaScript does style.” “This JavaScript does structure.” “This JavaScript does database queries.”

Mistletoe Offline

This article first appeared in 24 Ways, the online advent calendar for geeks.

It’s that time of year, when we gather together as families to celebrate the life of the greatest person in history. This man walked the Earth long before us, but he left behind words of wisdom. Those words can guide us every single day, but they are at the forefront of our minds during this special season.

I am, of course, talking about Murphy, and the golden rule he gave unto us:

Anything that can go wrong will go wrong.

So true! I mean, that’s why we make sure we’ve got nice 404 pages. It’s not that we want people to ever get served a File Not Found message, but we acknowledge that, despite our best efforts, it’s bound to happen sometime. Murphy’s Law, innit?

But there are some Murphyesque situations where even your lovingly crafted 404 page won’t help. What if your web server is down? What if someone is trying to reach your site but they lose their internet connection? These are all things that can—and will—go wrong.

I guess there’s nothing we can do about those particular situations, right?

Wrong!

A service worker is a Murphy-battling technology that you can inject into a visitor’s device from your website. Once it’s installed, it can intercept any requests made to your domain. If anything goes wrong with a request—as is inevitable—you can provide instructions for the browser. That’s your opportunity to turn those server outage frowns upside down. Take those network connection lemons and make network connection lemonade.

If you’ve got a custom 404 page, why not make a custom offline page too?

Get your server in order

Step one is to make …actually, wait. There’s a step before that. Step zero. Get your site running on HTTPS, if it isn’t already. You won’t be able to use a service worker unless everything’s being served over HTTPS, which makes sense when you consider the awesome power that a service worker wields.

If you’re developing locally, service workers will work fine for localhost, even without HTTPS. But for a live site, HTTPS is a must.

Make an offline page

Alright, assuming your site is being served over HTTPS, step one is to create an offline page. Make it as serious or as quirky as is appropriate for your particular brand. If the website is for a restaurant, maybe you could put the telephone number and address of the restaurant on the custom offline page (unsolicited advice: you could also put this on the home page, you know). Here’s an example of the custom offline page for this year’s Ampersand conference.

When you’re done, publish the offline page at a suitably imaginative URL like, say, /offline.html.
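
If you want a bare-bones starting point, something like this would do (the stylesheet path is just a placeholder):

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Offline</title>
<link rel="stylesheet" href="/path/to/stylesheet.css">
</head>
<body>
<h1>Sorry, you appear to be offline</h1>
<p>Never mind. Have a nice cup of tea and try again later.</p>
</body>
</html>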

Pre-cache your offline page

Now create a JavaScript file called serviceworker.js. This is the script that the browser will look to when certain events are triggered. The first event to handle is what to do when the service worker is installed on the user’s device. When that happens, an event called install is fired. You can listen out for this event using addEventListener:

addEventListener('install', installEvent => {
// put your instructions here.
}); // end addEventListener

In this case, you want to make sure that your lovingly crafted custom offline page is put into a nice safe cache. You can use the Cache API to do this. You get to create as many caches as you like, and you can call them whatever you want. Here, I’m going to call the cache Johnny just so I can refer to it as JohnnyCache in the code:

addEventListener('install', installEvent => {
  installEvent.waitUntil(
    caches.open('Johnny')
    .then( JohnnyCache => {
      return JohnnyCache.addAll([
        '/offline.html'
      ]); // end addAll
    }) // end open.then
  ); // end waitUntil
}); // end addEventListener

I’m betting that your lovely offline page is linking to a CSS file, maybe an image or two, and perhaps some JavaScript. You can cache all of those at this point:

addEventListener('install', installEvent => {
  installEvent.waitUntil(
    caches.open('Johnny')
    .then( JohnnyCache => {
      return JohnnyCache.addAll([
        '/offline.html',
        '/path/to/stylesheet.css',
        '/path/to/javascript.js',
        '/path/to/image.jpg'
      ]); // end addAll
    }) // end open.then
  ); // end waitUntil
}); // end addEventListener

Make sure that the URLs are correct. If just one of the URLs in the list fails to resolve, none of the items in the list will be cached.

Intercept requests

The next event you want to listen for is the fetch event. This is probably the most powerful—and, let’s be honest, the creepiest—feature of a service worker. Once it has been installed, the service worker lurks on the user’s device, waiting for any requests made to your site. Every time the user requests a web page from your site, a fetch event will fire. Every time that page requests a style sheet or an image, a fetch event will fire. You can provide instructions for what should happen each time:

addEventListener('fetch', fetchEvent => {
// What happens next is up to you!
}); // end addEventListener

Let’s write a fairly conservative script with the following logic:

  • Whenever a file is requested,
  • First, try to fetch it from the network,
  • But if that doesn’t work, try to find it in the cache,
  • But if that doesn’t work, and it’s a request for a web page, show the custom offline page instead.

Here’s how that translates into JavaScript:

// Whenever a file is requested
addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  fetchEvent.respondWith(
    // First, try to fetch it from the network
    fetch(request)
    .then( responseFromFetch => {
      return responseFromFetch;
    }) // end fetch.then
    // But if that doesn't work
    .catch( fetchError => {
      // try to find it in the cache
      return caches.match(request)
      .then( responseFromCache => {
        if (responseFromCache) {
          return responseFromCache;
        // But if that doesn't work
        } else {
          // and it's a request for a web page
          if (request.headers.get('Accept').includes('text/html')) {
            // show the custom offline page instead
            return caches.match('/offline.html');
          } // end if
        } // end if/else
      }) // end match.then
    }) // end fetch.catch
  ); // end respondWith
}); // end addEventListener

I am fully aware that I may have done some owl-drawing there. If you need a more detailed breakdown of what’s happening at each point in the code, I’ve written a whole book for you. It’s the perfect present for Murphymas.

Hook up your service worker script

You can publish your service worker script at /serviceworker.js but you still need to tell the browser where to look for it. You can do that using JavaScript. Put this in an existing JavaScript file that you’re calling in to every page on your site, or add this in a script element at the end of every page’s HTML:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/serviceworker.js');
}

That tells the browser to start installing the service worker, but not without first checking that the browser understands what a service worker is. When it comes to JavaScript, feature detection is your friend.

You might already have some JavaScript files in a folder like /assets/js/ and you might be tempted to put your service worker script in there too. Don’t do that. If you do, the service worker will only be able to handle requests for files within /assets/js/. By putting the service worker script in the root directory, you’re making sure that every request can be intercepted.

Go further!

Nicely done! You’ve made sure that if—no, when—a visitor can’t reach your website, they’ll get your hand-tailored offline page. You have temporarily defeated the forces of chaos! You have briefly fought the tide of entropy! You have made a small but ultimately futile gesture against the inevitable heat-death of the universe!

This is just the beginning. You can do more with service workers.

What if, every time you fetched a page from the network, you stored a copy of that page in a cache? Then if that person tries to reach that page later, but they’re offline, you could show them the cached version.
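
Sketched out, that might look something like this (I’m reusing the Johnny cache purely for illustration):

// A sketch: cache a copy of every page as it's fetched
addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
      // a response can only be read once, so cache a clone
      const copy = responseFromFetch.clone();
      fetchEvent.waitUntil(
        caches.open('Johnny')
        .then( JohnnyCache => {
          return JohnnyCache.put(request, copy);
        })
      );
      return responseFromFetch;
    })
    .catch( fetchError => {
      // offline? try the cache, falling back to the offline page
      return caches.match(request)
      .then( responseFromCache => {
        return responseFromCache || caches.match('/offline.html');
      });
    })
  ); // end respondWith
}); // end addEventListener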

Or, what if instead of reaching out the network first, you checked to see if a file is in the cache first? You could serve up that cached version—which would be blazingly fast—and still fetch a fresh version from the network in the background to pop in the cache for next time. That might be a good strategy for images.
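
Here’s a rough sketch of that idea, scoped to images (again, the cache name is just for illustration):

// A sketch: serve images from the cache first, then refresh in the background
addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.headers.get('Accept').includes('image')) {
    fetchEvent.respondWith(
      caches.match(request)
      .then( responseFromCache => {
        // kick off a network fetch either way
        const fetchPromise = fetch(request)
        .then( responseFromFetch => {
          const copy = responseFromFetch.clone();
          fetchEvent.waitUntil(
            caches.open('Johnny')
            .then( JohnnyCache => {
              return JohnnyCache.put(request, copy);
            })
          );
          return responseFromFetch;
        });
        // the cached version wins if it exists; otherwise wait for the network
        return responseFromCache || fetchPromise;
      })
    );
  } // end if
}); // end addEventListener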

So many options! The hard part isn’t writing the code, it’s figuring out the steps you want to take. Once you’ve got those steps written out, then it’s a matter of translating them into JavaScript.

Inevitably there will be some obstacles along the way—usually it’s a misplaced curly brace or a missing parenthesis. Don’t be too hard on yourself if your code doesn’t work at first. That’s just Murphy’s Law in action.

Wednesday, December 5th, 2018

Going Offline First (Video Series)

A five-part video series from Ire on how she built the “save for offline” functionality on her site.

The first one is about getting a site set up on Ghost, so you can probably safely skip that one and go straight to the second video to get down to the nitty-gritty of the Cache API and service workers.

Tuesday, December 4th, 2018

Mistletoe Offline ◆ 24 ways

They let me write a 24 Ways article again. Will they never learn?

This one’s a whirlwind tour of using a service worker to provide a custom offline page, in the style of Going Offline.

By the way, just for the record, I initially rejected this article’s title out of concern that injecting a Cliff Richard song into people’s brains was cruel and unusual punishment. I was overruled.

Monday, December 3rd, 2018

Reluctant Gatekeeping: The Problem With Full Stack | HeydonWorks

The value you want from a CSS expert is their CSS, not their JavaScript, so it’s absurd to make JavaScript a requirement.

Absolutely spot on! And it cuts both ways:

Put CSS in JS and anyone who wishes to write CSS now has to know JavaScript. Not just JavaScript, but—most likely—the specific ‘flavor’ of JavaScript called React. That’s gatekeeping, first of all, but the worst part is the JavaScript aficionado didn’t want CSS on their plate in the first place.

Wednesday, November 28th, 2018

Just markup | justmarkup

Telling other people working on the web and doing a great job building web sites that they are useless because they focus on HTML and CSS is very wrong.

Saturday, November 24th, 2018

PushAPI without Notifications | Seblog

Remember when I wrote about using push without notifications? Sebastiaan has written up the details of the experiment he conducted at Indie Web Camp Berlin.

Friday, November 23rd, 2018

When to use CSS vs. JavaScript | Go Make Things

Chris Ferdinandi has a good rule of thumb:

If something I want to do with JavaScript can be done with CSS instead, use CSS.

Makes sense, given their differing error-handling models:

A JavaScript error can bring all of the JS on a page to a screeching halt. Mistype a CSS property or miss a semicolon? The browser just skips the property and moves on. Use an unsupported feature? Same thing.

But he also cautions against going too far with CSS. Anything to do with state should be done with JavaScript:

If the item requires interaction from the user, use JavaScript (things like hovering, focusing, clicking, etc.).

‘Sfunny; I remember when we got pseudo-classes, I wrote a somewhat tongue-in-cheek post called :hover Considered Harmful:

Presentation and behaviour… the twain have met, the waters are muddied, the issues are confused.

Tuesday, November 20th, 2018

Web workers vs Service workers vs Worklets

A great primer by Ire:

Web workers, service workers, and worklets are all scripts that run on a separate thread. So what are the differences between these three types of workers?

Tuesday, November 13th, 2018

Optimise without a face

I’ve been playing around with the newly-released Squoosh, the spiritual successor to Jake’s SVGOMG. You can drag images into the browser window, and eyeball the changes that any optimisations might make.

On a project that Cassie is working on, it worked really well for optimising some JPEGs. But there were a few images that would require a bit more fine-grained control of the optimisations. Specifically, pictures with human faces in them.

I’ve written about this before. If there’s a human face in an image, I open that image in a graphics editing tool like Photoshop, select everything but the face, and add a bit of blur. Because humans are hard-wired to focus on faces, we’ll notice any jaggy artifacts on a face, but we’re far less likely to notice jagginess in background imagery: walls, materials, clothing, etc.

On the face of it (hah!), a browser-based tool like Squoosh wouldn’t be able to optimise for faces, but then Cassie pointed out something really interesting…

When we were both at FFConf on Friday, there was a great talk by Eleanor Haproff on machine learning with JavaScript. It turns out there are plenty of smart toolkits out there, and one of them is facial recognition. So I wonder if it’s possible to build an in-browser tool with this workflow:

  • Drag or upload an image into the browser window,
  • A facial recognition algorithm finds any faces in the image,
  • Those portions of the image remain crisp,
  • The rest of the image gets a slight blur,
  • Download the optimised image.

Maybe the selecting/blurring part would need canvas? I don’t know.
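
Just to think out loud, here’s a rough sketch of how the blurring part might work with canvas, assuming the experimental FaceDetector interface from the Shape Detection API (which is far from universally supported, so consider this hypothetical):

// A sketch only: FaceDetector is experimental and barely supported
const image = document.querySelector('img');
const canvas = document.createElement('canvas');
canvas.width = image.naturalWidth;
canvas.height = image.naturalHeight;
const context = canvas.getContext('2d');
new FaceDetector().detect(image)
.then( faces => {
  // draw a blurred version of the whole image
  context.filter = 'blur(3px)';
  context.drawImage(image, 0, 0);
  // then redraw each detected face region, crisp, on top
  context.filter = 'none';
  faces.forEach( face => {
    const {x, y, width, height} = face.boundingBox;
    context.drawImage(image, x, y, width, height, x, y, width, height);
  });
});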

Anyway, I thought this was a brilliant bit of synthesis from Cassie, and now I’ve got two questions:

  1. Does this exist yet? And, if not,
  2. Does anyone want to try building it?

Monday, October 22nd, 2018

33 Concepts Every JavaScript Developer Should Know

Please ignore the hyperbolic linkbaity title designed to stress you out and make you feel inadequate. This is a handy listing of links to lots of JavaScript resources, grouped by topic …some of which you could know.

Thursday, October 11th, 2018

Why do people decide to use frameworks?

Some sensible answers to this question here…

…of which, exactly zero mention end users.

Monday, October 8th, 2018

Notes on prototyping – Ben Frain

Good tips on prototyping using the very materials that the final product will be built in—HTML, CSS, and JavaScript.

The only thing I would add is that, in my experience, it’s vital that the prototype does not morph into the final product …no matter how tempting it sometimes seems.

Prototypes are made to be discarded (having validated or invalidated an idea). Making a prototype and making something for production require very different mindsets: with prototyping it’s all about speed of creation; with production work, it’s all about quality of execution.

The Hurricane Web | Max Böck - Frontend Web Developer

When a storm comes, some of the big news sites like CNN and NPR strip down to a zippy performant text-only version that delivers the content without the bells and whistles.

I’d argue though that in some aspects, they are actually better than the original.

The numbers:

The “full” NPR site in comparison takes ~114 requests and weighs close to 3MB on average. Time to first paint is around 20 seconds on slow connections. It includes ads, analytics, tracking scripts and social media widgets.

Meanwhile, the actual news content is roughly the same.

I quite like the idea of storm-driven development.

…websites built for a storm do not rely on Javascript. The benefit simply does not outweigh the cost. They rely on resilient HTML, because that’s all that is really necessary here.

Thursday, October 4th, 2018

Declaration

I like the robustness that comes with declarative languages. I also like the power that comes with imperative languages. Best of all, I like having the choice.

Take the video and audio elements, for example. If you want, you can embed a video or audio file into a web page using a straightforward declaration in HTML.

<audio src="..." controls><!-- fallback goes here --></audio>

Straightaway, that covers 80%-90% of use cases. But if you need to do more—like, provide your own custom controls—there’s a corresponding API that’s exposed in JavaScript. Using that API, you can do everything that you can do with the HTML element, and a whole lot more besides.
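
For instance, wiring up a custom play/pause button takes only a few lines of that API (the button element here is my own invention):

// A sketch: a custom play/pause button for an audio element
const audio = document.querySelector('audio');
const button = document.querySelector('#playpause');
button.addEventListener('click', () => {
  if (audio.paused) {
    audio.play();
    button.textContent = 'pause';
  } else {
    audio.pause();
    button.textContent = 'play';
  }
});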

It’s a similar story with animation. CSS provides plenty of animation power, but it’s limited in the events that can trigger the animations. That’s okay. There’s a corresponding JavaScript API that gives you more power. Again, the CSS declarations cover 80%-90% of use cases, but for anyone in that 10%-20%, the web animation API is there to help.
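
For example, the same fade you could declare in CSS can be triggered from any event you like using element.animate (the class name here is made up):

// A sketch: the same fade, driven imperatively
const element = document.querySelector('.fade-in');
element.animate(
  [ { opacity: 0 }, { opacity: 1 } ],
  { duration: 300, easing: 'ease-out', fill: 'forwards' }
);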

Client-side form validation is another good example. For most of us, the HTML attributes—required, type, etc.—are probably enough most of the time.

<input type="email" required />

When we need more fine-grained control, there’s a validation API available in JavaScript (yes, yes, I know that the API itself is problematic, but you get the point).
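
Here’s a sketch of that API in action, customising the error message for the email input above:

// A sketch: the Constraint Validation API in action
const input = document.querySelector('input[type="email"]');
input.addEventListener('input', () => {
  if (input.validity.typeMismatch) {
    input.setCustomValidity('Please enter a valid email address.');
  } else {
    input.setCustomValidity('');
  }
});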

I really like this design pattern. Cover 80% of the use cases with a declarative solution in HTML, but also provide an imperative alternative in JavaScript that gives more power. HTML5 has plenty of examples of this pattern. But I feel like the history of web standards has a few missed opportunities too.

Geolocation is a good example of an unbalanced feature. If you want to use it, you must use JavaScript. There is no declarative alternative. This doesn’t exist:

<input type="geolocation" />

That’s a shame. Anyone writing a form that asks for the user’s location—in order to submit that information to a server for processing—must write some JavaScript. That’s okay, I guess, but it’s always going to be that bit more fragile and error-prone compared to markup.

(And just in case you’re thinking of the fallback—which would be for the input element to be rendered as though its type value were text—and you think it’s ludicrous to expect users with non-supporting browsers to enter latitude and longitude coordinates by hand, I direct your attention to input type="color": in non-supporting browsers, it’s rendered as input type="text" and users are expected to enter colour values by hand.)

Geolocation is an interesting use case because it only works on HTTPS. There are quite a few JavaScript APIs that quite rightly require a secure context—like service workers—but I can’t think of a single HTML element or attribute that requires HTTPS (although that will soon change if we don’t act to stop plans to create a two-tier web). But that can’t have been the thinking behind geolocation being JavaScript only; when geolocation first shipped, it was available over HTTP connections too.

Anyway, that’s just one example. Like I said, it’s not that I’m in favour of declarative solutions instead of imperative ones; I strongly favour the choice offered by providing declarative solutions as well as imperative ones.

In recent years there’s been a push to expose low-level browser features to developers. They’re inevitably exposed as JavaScript APIs. In most cases, that makes total sense. I can’t really imagine a declarative way of accessing the fetch or cache APIs, for example. But I think we should be careful that it doesn’t become the only way of exposing new browser features. I think that, wherever possible, the design pattern of exposing new features declaratively and imperatively offers the best of both worlds—ease of use for the simple use cases, and power for the more complex use cases.

Previously, it was up to browser makers to think about these things. But now, with the advent of web components, we developers are gaining that same level of power and responsibility. So if you’re making a web component that you’re hoping other people will also use, maybe it’s worth keeping this design pattern in mind: allow authors to configure the functionality of the component using HTML attributes and JavaScript methods.
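
As a sketch of what that might look like (the like-button element is entirely made up), the same component can be configured declaratively with an attribute or imperatively with a method:

// A sketch: one component, configurable from HTML and from JavaScript
class LikeButton extends HTMLElement {
  static get observedAttributes() {
    return ['count'];
  }
  attributeChangedCallback() {
    this.textContent = `Like (${this.getAttribute('count')})`;
  }
  increment() {
    const count = parseInt(this.getAttribute('count') || '0', 10);
    this.setAttribute('count', count + 1);
  }
}
customElements.define('like-button', LikeButton);

Anyone who just wants the default behaviour can write <like-button count="5"></like-button> in their markup; anyone who needs more control can call increment() from a script.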

Friday, September 28th, 2018

DasSur.ma – The 9am rush hour

Why JavaScript is a real performance bottleneck:

Rush hour. The worst time of day to travel. For many it’s not possible to travel at any other time of day because they need to get to work by 9am.

This is exactly what a lot of web code looks like today: everything runs on a single thread, the main thread, and the traffic is bad. In fact, it’s even more extreme than that: there’s one lane all the way from the city center to the outskirts, and quite literally everyone is on the road, even if they don’t need to be at the office by 9am.

Thoughts on Offline-first | Trys Mudford

Service Workers have such huge potential power, and I feel like we (developers on the web) have barely scratched the surface with what’s possible.

Needless to say, I couldn’t agree more!

Trys is thinking through some of the implications of service workers, like how we refresh stale content, and how we deal with slow networks—something that’s actually more of a challenge than dealing with no network connection at all.

There’s some good food for thought here.

I’m so excited to see how we can use Service Workers to improve the web.

Monday, September 24th, 2018

Salty JavaScript analogy - HankChizlJaw

JavaScript is like salt. If you add just enough salt to a dish, it’ll help make the flavour awesome. Add too much though, and you’ll completely ruin it.

Sunday, September 23rd, 2018

Service workers in Samsung Internet browser

I was getting reports of some odd behaviour with the service worker on thesession.org, the Irish music website I run. Someone emailed me to say that they kept getting the offline page, even when their internet connection was perfectly fine and the site was up and running.

They didn’t mind answering my pestering follow-on questions to isolate the problem. They told me that they were using the Samsung Internet browser on Android. After a little searching, I found this message on a GitHub thread about using waitUntil. It’s from someone who works on the Samsung Internet team:

Sadly, the asynchronos waitUntil() is not implemented yet in our browser. Yes, we will implement it but our release cycle is so far. So, for a long time, we might not resolve the issue.

A-ha! That explains the problem. See, here’s the pattern I was using:

  1. When someone requests a file,
  2. fetch that file from the network,
  3. create a copy of the file and cache it,
  4. return the contents.

Step 1 is the event listener:

// 1. When someone requests a file
addEventListener('fetch', fetchEvent => {
  let request = fetchEvent.request;
  fetchEvent.respondWith(

Steps 2, 3, and 4 are inside that respondWith:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  caches.open(cacheName)
  .then( cache => {
    cache.put(request, copy);
  });
  // 4. return the contents.
  return responseFromFetch;
})

Step 4 might well complete while step 3 is still running (remember, everything in a service worker script is asynchronous so even though I’ve written out the steps sequentially, you never know what order the steps will finish in). That’s why I’m wrapping that third step inside fetchEvent.waitUntil:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  fetchEvent.waitUntil(
    caches.open(cacheName)
    .then( cache => {
      cache.put(request, copy);
    })
  );
  // 4. return the contents.
  return responseFromFetch;
})

If a browser (like Samsung Internet) doesn’t understand the bit where I say fetchEvent.waitUntil, then it will throw an error and execute the catch clause. That’s where I have my fifth and final step: “try looking in the cache instead, but if that fails, show the offline page”:

.catch( fetchError => {
  console.log(fetchError);
  return caches.match(request)
  .then( responseFromCache => {
    return responseFromCache || caches.match('/offline');
  });
})

Normally in this kind of situation, I’d use feature detection to check whether a browser understands a particular API method. But it’s a bit tricky to test for support for asynchronous waitUntil. That’s okay. I can use a try/catch statement instead. Here’s what my revised code looks like:

fetch(request)
.then( responseFromFetch => {
  let copy = responseFromFetch.clone();
  try {
    fetchEvent.waitUntil(
      caches.open(cacheName)
      .then( cache => {
        cache.put(request, copy);
      })
    );
  } catch (error) {
    console.log(error);
  }
  return responseFromFetch;
})

Now I’ve managed to localise the error. If a browser doesn’t understand the bit where I say fetchEvent.waitUntil, it will execute the code in the catch clause, and then carry on as usual. (I realise it’s a bit confusing that there are two different kinds of catch clauses going on here: on the outside there’s a .then()/.catch() combination; inside is a try{}/catch{} combination.)

At some point, when support for async waitUntil statements is universal, this precautionary measure won’t be needed, but for now wrapping them inside try doesn’t do any harm.

There are a few places in chapter five of Going Offline—the chapter about service worker strategies—where I show examples using async waitUntil. There’s nothing wrong with the code in those examples, but if you want to play it safe (especially while Samsung Internet doesn’t support async waitUntil), feel free to wrap those examples in try/catch statements. But I’m not going to make those changes part of the errata for the book. In this case, the issue isn’t with the code itself, but with browser support.

Thursday, September 20th, 2018

The costs and benefits of tracking scripts – business vs. user // Sebastian Greger

I am having a hard time seeing the business benefits weighing in more than the user cost (at least for those many organisations out there who rarely ever put that data to proper use). After all, keeping the costs low for the user should be in the core interest of the business as well.