Tags: script


Monday, September 24th, 2018

Salty JavaScript analogy - HankChizlJaw

JavaScript is like salt. If you add just enough salt to a dish, it’ll help make the flavour awesome. Add too much though, and you’ll completely ruin it.

Sunday, September 23rd, 2018

Service workers in Samsung Internet browser

I was getting reports of some odd behaviour with the service worker on thesession.org, the Irish music website I run. Someone emailed me to say that they kept getting the offline page, even when their internet connection was perfectly fine and the site was up and running.

They didn’t mind answering my pestering follow-on questions to isolate the problem. They told me that they were using the Samsung Internet browser on Android. After a little searching, I found this message on a GitHub thread about using waitUntil. It’s from someone who works on the Samsung Internet team:

Sadly, the asynchronous waitUntil() is not implemented yet in our browser. Yes, we will implement it but our release cycle is so far. So, for a long time, we might not resolve the issue.

A-ha! That explains the problem. See, here’s the pattern I was using:

  1. When someone requests a file,
  2. fetch that file from the network,
  3. create a copy of the file and cache it,
  4. return the contents.

Step 1 is the event listener:

// 1. When someone requests a file
addEventListener('fetch', fetchEvent => {
  let request = fetchEvent.request;
  fetchEvent.respondWith(

Steps 2, 3, and 4 are inside that respondWith:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  caches.open(cacheName)
  .then( cache => {
    cache.put(request, copy);
  });
  // 4. return the contents.
  return responseFromFetch;
})

Step 4 might well complete while step 3 is still running (remember, everything in a service worker script is asynchronous so even though I’ve written out the steps sequentially, you never know what order the steps will finish in). That’s why I’m wrapping that third step inside fetchEvent.waitUntil:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  fetchEvent.waitUntil(
    caches.open(cacheName)
    .then( cache => {
      cache.put(request, copy);
    })
  );
  // 4. return the contents.
  return responseFromFetch;
})

If a browser (like Samsung Internet) doesn’t support calling fetchEvent.waitUntil asynchronously like that, it will throw an error and execute the catch clause. That’s where I have my fifth and final step: “try looking in the cache instead, but if that fails, show the offline page”:

.catch( fetchError => {
  console.log(fetchError);
  return caches.match(request)
  .then( responseFromCache => {
    return responseFromCache || caches.match('/offline');
  });
})

Normally in this kind of situation, I’d use feature detection to check whether a browser understands a particular API method. But it’s a bit tricky to test for support for asynchronous waitUntil. That’s okay. I can use a try/catch statement instead. Here’s what my revised code looks like:

fetch(request)
.then( responseFromFetch => {
  let copy = responseFromFetch.clone();
  try {
    fetchEvent.waitUntil(
      caches.open(cacheName)
      .then( cache => {
        cache.put(request, copy);
      })
    );
  } catch (error) {
    console.log(error);
  }
  return responseFromFetch;
})

Now I’ve managed to localise the error. If a browser doesn’t understand the bit where I say fetchEvent.waitUntil, it will execute the code in the catch clause, and then carry on as usual. (I realise it’s a bit confusing that there are two different kinds of catch clauses going on here: on the outside there’s a .then()/.catch() combination; inside is a try{}/catch{} combination.)
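
For reference, here’s how all five steps fit together in a single fetch handler (assuming a cacheName variable defined elsewhere in the service worker script):

// 1. When someone requests a file
addEventListener('fetch', fetchEvent => {
  let request = fetchEvent.request;
  fetchEvent.respondWith(
    // 2. fetch that file from the network
    fetch(request)
    .then( responseFromFetch => {
      // 3. create a copy of the file and cache it
      let copy = responseFromFetch.clone();
      try {
        fetchEvent.waitUntil(
          caches.open(cacheName)
          .then( cache => {
            cache.put(request, copy);
          })
        );
      } catch (error) {
        console.log(error);
      }
      // 4. return the contents
      return responseFromFetch;
    })
    .catch( fetchError => {
      // 5. try looking in the cache instead, but if that fails, show the offline page
      console.log(fetchError);
      return caches.match(request)
      .then( responseFromCache => {
        return responseFromCache || caches.match('/offline');
      });
    })
  );
});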

At some point, when support for asynchronous waitUntil is universal, this precautionary measure won’t be needed, but for now wrapping those calls inside try doesn’t do any harm.

There are a few places in chapter five of Going Offline—the chapter about service worker strategies—where I show examples using async waitUntil. There’s nothing wrong with the code in those examples, but if you want to play it safe (especially while Samsung Internet doesn’t support async waitUntil), feel free to wrap those examples in try/catch statements. But I’m not going to make those changes part of the errata for the book. In this case, the issue isn’t with the code itself, but with browser support.

Thursday, September 20th, 2018

The costs and benefits of tracking scripts – business vs. user // Sebastian Greger

I am having a hard time seeing the business benefits weighing in more than the user cost (at least for those many organisations out there who rarely ever put that data to proper use). After all, keeping the costs low for the user should be in the core interest of the business as well.

Friday, September 14th, 2018

On using tracking scripts | justmarkup

Weighing up the pros and cons of adding tracking scripts to a website, from a business perspective and from a user perspective.

When looking at the costs versus the benefits it is hard to believe that almost every website is using tracking scripts.

The next time you implement a tracking script, it would be great if you could rethink it and ask yourself if it is really worth it.

Wednesday, September 12th, 2018

Private by Default

Feedbin has removed third-party iframes and JavaScript (oEmbed provides a nice alternative), as well as stripping out Google Analytics, and even web fonts that aren’t self-hosted. This is excellent!

Tuesday, September 11th, 2018

The “Developer Experience” Bait-and-Switch | Infrequently Noted

JavaScript is the web’s CO2. We need some of it, but too much puts the entire ecosystem at risk. Those who emit the most are furthest from suffering the consequences — until the ecosystem collapses. The web will not succeed in the markets and form-factors where computing is headed unless we get JS emissions under control.

Damn, that’s a fine opening! And the rest of this post by Alex is pretty darn great too. He’s absolutely right in calling out the fetishisation of developer experience at the expense of user needs:

The swap is executed by implying that by making things better for developers, users will eventually benefit equivalently. The unstated agreement is that developers share all of the same goals with the same intensity as end users and even managers. This is not true.

I have a feeling that this will be a very bitter pill for many developers to swallow:

If one views the web as a way to address a fixed market of existing, wealthy web users, then it’s reasonable to bias towards richness and lower production costs. If, on the other hand, our primary challenge is in growing the web along with the growth of computing overall, the ability to reasonably access content bumps up in priority. If you believe the web’s future to be at risk due to the unusability of most web experiences for most users, then discussion of developer comfort that isn’t tied to demonstrable gains for marginalized users is at best misguided.

Oh captain, my captain!

Tools that cost the poorest users to pay wealthy developers are bunk.

The top four web performance challenges

Danielle and I have been doing some front-end consultancy for a local client recently.

We’ve both been enjoying it a lot—it’s exhausting but rewarding work. So if you’d like us to come in and spend a few days with your company’s dev team, please get in touch.

I’ve certainly enjoyed the opportunity to watch Danielle in action, leading a workshop on refactoring React components in a pattern library. She’s incredibly knowledgeable in that area.

I’m clueless when it comes to React, but I really enjoy getting down to the nitty-gritty of browser features—HTML, CSS, and JavaScript APIs. Our skillsets complement one another nicely.

This recent work was what prompted my thoughts around the principles of robustness and least power. We spent a day evaluating a continuum of related front-end concerns: semantics, accessibility, performance, and SEO.

When it came to performance, a lot of the work was around figuring out the most suitable metric to prioritise:

  • time to first byte,
  • time to first render,
  • time to first meaningful paint, or
  • time to first meaningful interaction.

And that doesn’t even cover the more easily-measurable numbers like:

  • overall file size,
  • number of requests, or
  • PageSpeed Insights score.

One outcome was to realise that there’s a tendency (in performance, accessibility, or SEO) to focus on what’s easily measurable, not because it’s necessarily what matters, but precisely because it is easy to measure.
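
Part of the reason those numbers are so easy to measure is that browsers will hand them over directly. Here’s a rough sketch using the Navigation Timing and Paint Timing APIs (an illustration, not the code from our workshop):

// Log a few easily-measurable numbers once the page has loaded.
window.addEventListener('load', () => {
  let [navigation] = performance.getEntriesByType('navigation');
  if (navigation) {
    // time to first byte
    console.log('TTFB:', navigation.responseStart - navigation.requestStart);
    // number of requests (subresources, plus one for the page itself)
    console.log('Requests:', performance.getEntriesByType('resource').length + 1);
  }
  // first-paint and first-contentful-paint, where supported
  performance.getEntriesByType('paint').forEach( entry => {
    console.log(entry.name + ':', entry.startTime);
  });
});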

Then we got down to some nuts’n’bolts technology decisions. I took a step back and looked at the state of performance across the web. I thought it would be fun to rank the most troublesome technologies in order of tricksiness. I came up with a top four list.

Here we go, counting down from four to the number one spot…

4. Web fonts

Coming in at number four, it’s web fonts. Sometimes it’s the combined weight of multiple font files that’s the problem, but more often than not, it’s the perceived performance that suffers (mostly because of when the web fonts appear).

Fortunately there’s a straightforward question to ask in this situation: WWZD—What Would Zach Do?
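
One simple pattern in that spirit (a sketch, with a placeholder font name and class name) is to load the font with the CSS Font Loading API and only switch to it once it has arrived:

// Keep text visible in the fallback font; once the web font is
// ready, flip a class on the html element to opt in to it.
if ('fonts' in document) {
  document.fonts.load('1em "Web Font Name"')
  .then( () => {
    document.documentElement.classList.add('fonts-loaded');
  });
}

The corresponding CSS would only apply the web font when that fonts-loaded class is present.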

3. Images

At the number three spot, it’s images. There are more of them and they just seem to be getting bigger all the time. And yet, we have more tools at our disposal than ever—better file formats, and excellent browser support for responsive images. Heck, we’re even getting the ability to lazy load images in HTML now.
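
Here’s a sketch of the usual belt-and-braces pattern: use the native lazy-loading attribute where the browser supports it, and fall back to an IntersectionObserver where it doesn’t. (The data-src attribute is just a convention assumed for this example.)

// Find images that have deferred their real URL into data-src.
let images = document.querySelectorAll('img[data-src]');
if ('loading' in HTMLImageElement.prototype) {
  // The browser can lazy load natively: hand the images over to it.
  images.forEach( img => {
    img.setAttribute('loading', 'lazy');
    img.src = img.dataset.src;
  });
} else if ('IntersectionObserver' in window) {
  // Otherwise, only load each image as it approaches the viewport.
  let observer = new IntersectionObserver( entries => {
    entries.forEach( entry => {
      if (entry.isIntersecting) {
        entry.target.src = entry.target.dataset.src;
        observer.unobserve(entry.target);
      }
    });
  });
  images.forEach( img => observer.observe(img) );
} else {
  // No support for either: just load everything.
  images.forEach( img => {
    img.src = img.dataset.src;
  });
}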

So, as with web fonts, it feels like the impact of images on performance can be handled, as long as you give them some time and attention.

2. Our JavaScript

Just missing out on making the top spot is the JavaScript that we send down the pipe to our long-suffering users. There’s nothing wrong with the code itself—I’m sure it’s very good. There’s just too damn much of it. And that’s a real performance bottleneck, especially on mobile.

So stop sending so much JavaScript—a solution as simple as Monty Python’s instructions for playing the flute.

1. Other people’s JavaScript

At number one with a bullet, it’s all the crap that someone else tells us to put on our websites. Analytics. Ads. Trackers. Beacons. “It’s just one little script”, they say. And then that one little script calls in another, and another, and another.

It’s so disheartening when you’ve devoted your time and energy to your web font loading strategy, and optimising your images, and unbundling your JavaScript …only to have someone else’s JavaScript just shit all over your nice performance budget.

Here’s the really annoying thing: when I go to performance conferences, or participate in performance discussions, you know who’s nowhere to be found? The people making those third-party scripts.

The narrative around front-end performance is that it’s up to us developers to take responsibility for how our websites perform. But by far the biggest performance impact comes from third-party scripts.

There is a solution to this, but it’s not a technical one. We could refuse to add overweight (and in many cases, unethical) third-party scripts to the sites we build.

I have many, many issues with Google’s AMP project, but I completely acknowledge that it solves a political problem:

No external JavaScript is allowed in an AMP HTML document. This covers third-party libraries, advertising and tracking scripts. This is A-okay with me.

The reasons given for this ban are related to performance and I agree with them completely. Big bloated JavaScript libraries are one of the biggest performance killers on the web.

But how can we take that lesson from AMP and apply it to all our web pages? If we simply refuse to be the one to add those third-party scripts, we get fired, and somebody else comes in who is willing to poison web pages with third-party scripts. There’s nothing to stop companies doing that.

Unless…

Suppose we were to all make a pact that we would stand in solidarity with any of our fellow developers in that sort of situation. A sort of joining-together. A union, if you will.

There is power in a factory, power in the land, power in the hands of the worker, but it all amounts to nothing if together we don’t stand.

There is power in a union.

Monday, September 10th, 2018

Robustness and least power

There’s a great article by Steven Garrity over on A List Apart called Design with Difficult Data. It runs through the advantages of using unusual content to stress-test interfaces, referencing Postel’s Law, AKA the robustness principle:

Be conservative in what you send, be liberal in what you accept.

Even though the robustness principle was formulated for packet-switching, I see it at work in all sorts of disciplines, including design. A good example is in best practices for designing forms:

Every field you ask users to fill out requires some effort. The more effort is needed to fill out a form, the less likely users will complete the form. That’s why the foundational rule of form design is shorter is better — get rid of all inessential fields.

In other words, be conservative in the number of form fields you send to users. But then, when it comes to users filling in those fields:

It’s very common for a few variations of an answer to a question to be possible; for example, when a form asks users to provide information about their state, and a user responds by typing their state’s abbreviation instead of the full name (for example, CA instead of California). The form should accept both formats, and it’s the developer’s job to convert the data into a consistent format.

In other words, be liberal in what you accept from users.
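
As a toy example of that liberal acceptance (the mapping here is purely illustrative):

// Accept several common ways of writing a state and convert them
// into one consistent format before storing the data.
let states = {
  'ca': 'California',
  'california': 'California',
  'ny': 'New York',
  'new york': 'New York'
};
function normaliseState(input) {
  let key = input.trim().toLowerCase();
  return states[key] || input.trim();
}
normaliseState('CA');         // 'California'
normaliseState(' New York '); // 'New York'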

I find the robustness principle to be an immensely powerful way of figuring out how to approach many design problems. When it comes to figuring out what specific tools or technologies to use, there’s an equally useful principle: the rule of least power:

Choose the least powerful language suitable for a given purpose.

On the face of it, this sounds counter-intuitive; why forgo a powerful technology in favour of something less powerful?

Well, power comes with a price. Powerful technologies tend to be more complex, which means they can be trickier to use and trickier to swap out later.

Take the front-end stack, for example: HTML, CSS, and JavaScript. HTML and CSS are declarative, so you don’t get as much precise control as you get with an imperative language like JavaScript. But JavaScript comes with a steeper learning curve, and it’s more fragile too: browsers ignore any HTML or CSS they don’t understand, whereas a single JavaScript error can stop a whole script from running.

As a general rule, it’s always worth asking if you can accomplish something with a less powerful technology:

In the web front-end stack — HTML, CSS, JS, and ARIA — if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof, and just works.

  • Instead of using JavaScript to do animation, see if you can do it in CSS instead.
  • Instead of using JavaScript to do simple client-side form validation, try to use HTML input types and attributes like required.
  • Instead of using ARIA to give a certain role value to a div or span, try to use a more suitable HTML element instead.

It sounds a lot like the KISS principle: Keep It Simple, Stupid. But whereas the KISS principle can be applied within a specific technology—like keeping your CSS manageable—the rule of least power is all about evaluating technology; choosing the most appropriate technology for the task at hand.

There are some associated principles, like YAGNI: You Ain’t Gonna Need It. That helps you avoid picking a technology that’s more powerful than your current needs justify, just because you think you might need it in the future: a kind of premature optimisation. Or, as Rachel put it, stop solving problems you don’t yet have:

So make sure every bit of code added to your project is there for a reason you can explain, not just because it is part of some standard toolkit or boilerplate.

There’s no shortage of principles, laws, and rules out there, and I find many of them very useful, but if I had to pick just two that are particularly applicable to my work, they would be the robustness principle and the rule of least power.

After all, if they’re good enough for Tim Berners-Lee…

Sunday, September 9th, 2018

Removing jQuery from GitHub.com frontend | GitHub Engineering

You really don’t need jQuery any more …and that’s thanks to jQuery.

Here, the GitHub team talk through their process of swapping out jQuery for vanilla JavaScript, as well as their forays into web components (or at least the custom elements bit).

Thursday, September 6th, 2018

Chrome’s NOSCRIPT Intervention - TimKadlec.com

Testing time with Tim.

Long story short, the NOSCRIPT intervention looks like a really great feature for users. More often than not it provides significant reduction in data usage, not to mention the reduction in CPU time—no small thing for the many, many people running affordable, low-powered devices.

Saturday, September 1st, 2018

Offline Content with Service Worker — Chris Ruppel

A step-by-step walkthrough of a really useful service worker pattern: allowing users to save articles for offline reading at the click of a button (kind of like adding the functionality of Instapaper or Pocket to your own site).

Changing Our Approach to Anti-tracking - Future Releases

This is excellent news from Mozilla. Firefox is going to make it easier to block vampiric privacy-leeching and performance-draining third-party scripts and trackers.

In the physical world, users wouldn’t expect hundreds of vendors to follow them from store to store, spying on the products they look at or purchase. Users have the same expectations of privacy on the web, and yet in reality, they are tracked wherever they go.

Monday, August 27th, 2018

Disable scripts for Data Saver users on slow connections - Chrome Platform Status

An excellent idea for people in low-bandwidth situations: automatically disable JavaScript. As long as the site is built with progressive enhancement, there’s no problem (and if not, the user is presented with the choice to enable scripts).

Power to the people!

Tuesday, August 21st, 2018

The Web I Want - DEV Community 👩‍💻👨‍💻

Scores of people who just want to deliver their content and have it look vaguely nice are convinced you need every web technology under the sun to deliver text.

This is very lawn-off-getting, but I can relate.

I made my first website about 20 years ago and it delivered as much content as most websites today. It was more accessible, ran faster, and was easier to develop than 90% of the stuff you’ll read on here.

20 years later I browse the Internet with a few tabs open and I have somehow downloaded many megabytes of data, my laptop is on fire and yet in terms of actual content delivery nothing has really changed.

Monday, August 20th, 2018

Front-End Performance Checklist

A handy pre-launch go/no-go checklist to run through before your countdown.

Thursday, August 16th, 2018

How to build a simple Camera component - Frontend News #4

A step-by-step guide to wrapping up a self-contained bit of functionality (a camera, in this case) into a web component.

Mind you, it would be nice if there were some thought given to fallbacks. Like, say:

<simple-camera>
  <input type="file" accept="image/*">
</simple-camera>
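
Here’s a sketch of how such an element could treat that input as its fallback (none of this is from the article; the getUserMedia enhancement is just an assumption):

// Until this runs (or in browsers without custom element support),
// the plain file input keeps working as the fallback.
customElements.define('simple-camera', class extends HTMLElement {
  connectedCallback() {
    if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
      return; // no camera API: leave the file input alone
    }
    let video = document.createElement('video');
    video.setAttribute('autoplay', '');
    navigator.mediaDevices.getUserMedia({ video: true })
    .then( stream => {
      video.srcObject = stream;
      this.querySelector('input[type="file"]').hidden = true;
      this.appendChild(video);
    })
    .catch( error => {
      console.log(error); // permission refused: keep the fallback
    });
  }
});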

Going Offline by Jeremy Keith – a post by Marc Thiele

This is such a lovely, lovely review from Marc!

Jeremy’s way of writing certainly helps, as a specialised or technical book on a topic like Service Workers could certainly be one that bores you to death with dry, written explanations. But Jeremy has a friendly, fresh and entertaining way of writing books. Sometimes I caught myself with a grin on my face…

Tuesday, August 14th, 2018

005: Service workers - Web Components Club

I strongly recommend that you read Going Offline by Jeremy Keith. Before his book, I found the concept of service workers quite daunting and convinced myself that it’s one of those things that I’ll have to set aside a big chunk of time to learn. I got through Jeremy’s book in a few hours and felt confident and inspired. This is because he’s very good at explaining concepts in a friendly, concise manner.

Monday, August 13th, 2018

The power of progressive enhancement – No Divide – Medium

The beauty of this approach is that the site doesn’t ever appear broken and the user won’t even be aware that they are getting the ‘default’ experience. With progressive enhancement, every user has their own experience of the site, rather than an experience that the designers and developers demand of them.

A case study in applying progressive enhancement to all aspects of a site.

Progressive enhancement isn’t necessarily more work and it certainly isn’t a non-JavaScript fallback, it’s a change in how we think about our projects. A complete mindset change is required here and it starts by remembering that you don’t build websites for yourself, you build them for others.

Friday, August 10th, 2018

Understanding why Semantic HTML is important, as told by TypeScript.

Oh, this is such a good analogy from Mandy! Choosing the right HTML element is like choosing the right data type in a strongly typed programming language.

Get to know the HTML elements available to you, and use the appropriate one for your content. Make the most of it, like you would any language you choose to code with.