Tags: script

Friday, September 13th, 2019

5G Will Definitely Make the Web Slower, Maybe | Filament Group, Inc.

The Jevons Paradox in action:

Faster networks should fix our performance problems, but so far, they have had an interesting if unintentional impact on the web. This is because historically, faster network speed has enabled developers to deliver more code to users—in particular, more JavaScript code.

And because it’s JavaScript we’re talking about:

Even if folks are on a new fast network, they’re very likely choking on the code we’re sending, rendering the potential speed improvements of 5G moot.

The longer I spend in this field, the more convinced I am that web performance is not a technical problem; it’s a people problem.

Tuesday, September 10th, 2019

Request mapping

The Request Map Generator is a terrific tool. It’s made by Simon Hearne and uses the WebPageTest API.

You pop in a URL, it fetches the page and maps out all the subsequent requests in a nifty interactive diagram of circles, showing how many requests third-party scripts are themselves generating. I’ve found it to be a very effective way of showing the impact of third-party scripts to people who aren’t interested in looking at waterfall diagrams.

I was wondering… Wouldn’t it be great if this were built into browsers?

We already have a “Network” tab in our developer tools. The purpose of this tab is to show requests coming in. The browser already has all the information it needs to make a diagram of requests in the same way that the request map generator does.

In Firefox, there’s a little clock icon in the bottom left corner of the “Network” tab. Clicking that shows a pie-chart view of requests. That’s useful, but I’d love it if there were the option to also see the connected circles that the request map generator shows.

Just a thought.

Friday, September 6th, 2019

Offline listings

This is a brilliant technique by Remy!

If you’ve got a custom offline page that lists previously-visited pages (like I do on my site), you don’t have to choose between localStorage or IndexedDB—you can read the metadata straight from the HTML of the cached pages instead!

This seems forehead-smackingly obvious in hindsight. I’m totally stealing this.
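
As a rough sketch of the idea—the cache name, the element hook, and the way the metadata gets extracted are my assumptions here, not Remy’s actual code—the offline page could do something like this:

// A sketch for an offline page: list previously-visited pages by reading
// their titles straight out of the cached HTML ('pages' is an assumed cache name).
caches.open('pages')
.then( cache => cache.keys() )
.then( requests => Promise.all(
  requests.map( request =>
    caches.match(request)
    .then( response => response.text() )
    .then( html => {
      // Parse the cached HTML and pull the title out of it.
      const doc = new DOMParser().parseFromString(html, 'text/html');
      return { url: request.url, title: doc.title || request.url };
    })
  )
))
.then( pages => {
  // '#offline-pages' is an assumed element on the offline page.
  const list = document.querySelector('#offline-pages');
  pages.forEach( page => {
    const item = document.createElement('li');
    const link = document.createElement('a');
    link.href = page.url;
    link.textContent = page.title;
    item.appendChild(link);
    list.appendChild(item);
  });
});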

Tuesday, September 3rd, 2019

How Web Content Can Affect Power Usage | WebKit

The way you build web pages—using IntersectionObserver, for example—can have a direct effect on the climate emergency.

Webpages can be good citizens of battery life.

It’s important to measure the battery impact in Web Inspector and drive those costs down.

Friday, August 23rd, 2019

Brendan Dawes - Adobe Alternatives

Brendan describes the software he’s using to get away from Adobe’s mafia business model.

Is client side A/B testing always a bad idea in your experience? · Issue #53 · csswizardry/ama

Harry enumerates the reasons why client-side A/B testing is terrible:

  • It typically blocks rendering.
  • Providers are almost always off-site.
  • It happens on every page load.
  • No user-benefitting reuse.
  • They likely skip any governance process.

While your engineers are subject to linting, code-reviews, tests, auditors, and more, your marketing team have free rein of the front-end.

Note that the problem here is not A/B testing per se, it’s client-side A/B testing. For some reason, we seem to have collectively decided that A/B testing—like analytics—is something we should offload to the JavaScript parser in the user’s browser.

Friday, August 9th, 2019

Building an extensible app or library with vanilla JS | Go Make Things

This looks like a sensible approach to creating a modular architecture for a complex client-side JavaScript codebase.

I know a lot of people swear by ES6 imports, but this system worked really well for us. It gave us a simple, modular, extensible framework we can easily build on in the future.

Thursday, August 1st, 2019

Navigation preloads in service workers

There’s a feature in service workers called navigation preloads. It’s relatively recent, so it isn’t supported in every browser, but it’s still well worth using.

Here’s the problem it solves…

If someone makes a return visit to your site, and the service worker you installed on their machine isn’t active yet, the service worker boots up, and then executes its instructions. If those instructions say “fetch the page from the network”, then you’re basically telling the browser to do what it would’ve done anyway if there were no service worker installed. The only difference is that there’s been a slight delay because the service worker had to boot up first.

  1. The service worker activates.
  2. The service worker fetches the file.
  3. The service worker does something with the response.

It’s not a massive performance hit, but it’s still a bit annoying. It would be better if the service worker could boot up and still be requesting the page at the same time, like it would do if no service worker were present. That’s where navigation preloads come in.

  1. The service worker activates while simultaneously requesting the file.
  2. The service worker does something with the response.

Navigation preloads—like the name suggests—are only initiated when someone navigates to a URL on your site, either by following a link, or a bookmark, or by typing a URL directly into a browser. Navigation preloads don’t apply to requests made by a web page for things like images, style sheets, and scripts. By the time a request is made for one of those, the service worker is already up and running.

To enable navigation preloads, call the enable() method on registration.navigationPreload during the activate event in your service worker script. But first do a little feature detection to make sure registration.navigationPreload exists in this browser:

if (registration.navigationPreload) {
  addEventListener('activate', activateEvent => {
    activateEvent.waitUntil(
      registration.navigationPreload.enable()
    );
  });
}

If you’ve already got event listeners on the activate event, that’s absolutely fine: addEventListener isn’t exclusive—you can use it to assign multiple tasks to the same event.
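
For instance—this is purely illustrative, and the cache name is made up—you could have one listener that cleans up old caches sitting alongside the one that enables navigation preloads, and both will run when the activate event fires:

addEventListener('activate', activateEvent => {
  // A second, separate listener on the same event: tidy up old caches.
  activateEvent.waitUntil(
    caches.keys()
    .then( keys => Promise.all(
      keys
      .filter( key => key !== 'currentcache' ) // 'currentcache' is an assumed name
      .map( key => caches.delete(key) )
    ))
  );
});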

Now you need to make use of navigation preloads when you’re responding to fetch events. So if your strategy is to look in the cache first, there’s probably no point enabling navigation preloads. But if your default strategy is to fetch a page from the network, this will help.

Let’s say your current strategy for handling page requests looks like this:

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
      fetch(request)
      .then( responseFromFetch => {
        // maybe cache this response for later here.
        return responseFromFetch;
      })
      .catch( fetchError => {
        return caches.match(request)
        .then( responseFromCache => {
          return responseFromCache || caches.match('/offline');
        });
      })
    );
  }
});

That’s a fairly standard strategy: try the network first; if that doesn’t work, try the cache; as a last resort, show an offline page.

It’s that first step (“try the network first”) that can benefit from navigation preloads. If a preload request is already in flight, you’ll want to use that instead of firing off a new fetch request. Otherwise you’re making two requests for the same file.

To find out if a preload request is underway, you can check for the existence of the preloadResponse promise, which will be made available as a property of the fetch event you’re handling:

fetchEvent.preloadResponse

If that exists, you’ll want to use it instead of fetch(request).

if (fetchEvent.preloadResponse) {
  // do something with fetchEvent.preloadResponse
} else {
  // do something with fetch(request)
}

You could structure your code like this:

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.headers.get('Accept').includes('text/html')) {
    if (fetchEvent.preloadResponse) {
      fetchEvent.respondWith(
        fetchEvent.preloadResponse
        .then( responseFromPreload => {
          // maybe cache this response for later here.
          return responseFromPreload;
        })
        .catch( preloadError => {
          return caches.match(request)
          .then( responseFromCache => {
            return responseFromCache || caches.match('/offline');
          });
        })
      );
    } else {
      fetchEvent.respondWith(
        fetch(request)
        .then( responseFromFetch => {
          // maybe cache this response for later here.
          return responseFromFetch;
        })
        .catch( fetchError => {
          return caches.match(request)
          .then( responseFromCache => {
            return responseFromCache || caches.match('/offline');
          });
        })
      );
    }
  }
});

But that’s not very DRY. Your logic is identical, regardless of whether the response is coming from fetch(request) or from fetchEvent.preloadResponse. It would be better if you could minimise the amount of duplication.

One way of doing that is to abstract away the promise you’re going to use into a variable. Let’s call it retrieve. If a preload is underway, we’ll assign it to that variable:

let retrieve;
if (fetchEvent.preloadResponse) {
  retrieve = fetchEvent.preloadResponse;
}

If there is no preload happening (or this browser doesn’t support it), assign a regular fetch request to the retrieve variable:

let retrieve;
if (fetchEvent.preloadResponse) {
  retrieve = fetchEvent.preloadResponse;
} else {
  retrieve = fetch(request);
}

If you like, you can squash that into a ternary operator:

const retrieve = fetchEvent.preloadResponse ? fetchEvent.preloadResponse : fetch(request);

Use whichever syntax you find more readable.

Now you can apply the same logic, regardless of whether retrieve is a navigation preload or a fetch request:

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.headers.get('Accept').includes('text/html')) {
    const retrieve = fetchEvent.preloadResponse ? fetchEvent.preloadResponse : fetch(request);
    fetchEvent.respondWith(
      retrieve
      .then( responseFromRetrieve => {
        // maybe cache this response for later here.
        return responseFromRetrieve;
      })
      .catch( fetchError => {
        return caches.match(request)
        .then( responseFromCache => {
          return responseFromCache || caches.match('/offline');
        });
      })
    );
  }
});

I think that’s the least invasive way to update your existing service worker script to take advantage of navigation preloads.

Like I said, navigation preloads can give a bit of a performance boost if you’re using a network-first strategy. That’s what I’m doing here on adactio.com and on thesession.org, so I’ve updated their service workers to take advantage of navigation preloads. But on Resilient Web Design, which uses a cache-first strategy, there wouldn’t be much point enabling navigation preloads.

Jeff Posnick made this point in his write-up of bringing service workers to Google search:

Adding a service worker to your web app means inserting an additional piece of JavaScript that needs to be loaded and executed before your web app gets responses to its requests. If those responses end up coming from a local cache rather than from the network, then the overhead of running the service worker is usually negligible in comparison to the performance win from going cache-first. But if you know that your service worker always has to consult the network when handling navigation requests, using navigation preload is a crucial performance win.

Oh, and those browsers that don’t yet support navigation preloads? No problem. It’s a progressive enhancement. Everything still works just like it did before. And having a service worker on your site in the first place is itself a progressive enhancement. So enabling navigation preloads is like a progressive enhancement within a progressive enhancement. It’s progressive enhancements all the way down!

By the way, if all of this service worker stuff sounds like gibberish, but you wish you understood it, I think my book, Going Offline, will prove quite valuable.

The web without the web - DEV Community 👩‍💻👨‍💻

I love React. I love how server side rendering React apps is trivial because it all compiles down to vanilla HTML rather than web components, effectively turning it into a kickass template engine that can come alive. I love the way you can very effectively still do progressive enhancement by using completely semantic markup and then letting hydration do more to it.

I also hate React. I hate React because these behaviours are not defaults. React is not gonna warn you if you make a form using divs and unlabelled textboxes and send the whole thing to a server. I hate React because CSS-in-JS approaches by default encourage you to write completely self contained one off components rather than trying to build a website UI up as a whole. I hate the way server side rendering and progressive enhancement are not defaults, but rather things you have to go out of your way to do.

An absolutely brilliant post by Laura on how the priorities baked into JavaScript tools like React are really out of whack. They’ll make sure your behind-the-scenes code is super clean, but not give a rat’s ass for the quality of the output that users have to interact with.

And if you want to adjust the front-end code, you’ve got to set up all this tooling just to change a div to a button. That’s quite a barrier to entry.

In elevating frontend to the land of Serious Code we have not just made things incredibly over-engineered but we have also set fire to all the ladders that we used to get up here in the first place.

AMEN!

I love React because it lets me do my best work faster and more easily. I hate React because the culture around it more than the library itself actively prevents other people from doing their best work.

Tuesday, July 30th, 2019

Don’t build that app! – Luke Jackson - YouTube

This is a fascinating look at how you can get the benefits of React and npm without using React and npm.

Here’s an accompanying article on the same topic.

Saturday, July 27th, 2019

How to test vanilla JS performance | Go Make Things

Did you know about console.time() and console.timeEnd()? I did not.
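
In case they’re new to you too, here’s a quick illustration—the label and the loop are just for demonstration:

console.time('build list');
for (let i = 0; i < 10000; i++) {
  document.createElement('li');
}
console.timeEnd('build list');
// logs something like: build list: 4.2ms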

Thursday, July 25th, 2019

Principle

I like good design principles. I collect design principles—of varying quality—at principles.adactio.com. Ben Brignell also has a (much larger) collection at principles.design.

You can spot the less useful design principles after a while. They tend to be wishy-washy; more like empty aspirational exhortations than genuinely useful guidelines for alignment. I’ve written about what makes for good design principles before. Matthew Ström also asked—and answered—What makes a good design principle?

  • Good design principles are memorable.
  • Good design principles help you say no.
  • Good design principles aren’t truisms.
  • Good design principles are applicable.

I like those. They’re like design principles for design principles.

One set of design principles that I’ve included in my collection is from gov.uk: the government design principles. I think they’re very well thought-through (although I’m always suspicious when I see a nice even number like 10 for the number of items in a list). There’s a great line in design principle number two—Do less:

Government should only do what only government can do.

This wasn’t a theoretical issue. The multiple departmental websites that preceded gov.uk were notorious for having too much irrelevant content—content that was readily available elsewhere. It was downright wasteful to duplicate that content on a government site. It wasn’t appropriate.

Appropriateness is something I keep coming back to when it comes to evaluating web technologies. I don’t think there are good tools and bad tools; just tools that are appropriate or inappropriate for the task at hand. Whether it’s task runners or JavaScript frameworks, appropriateness feels like it should be the deciding factor.

I think that the design principle from GDS could be abstracted into a general technology principle:

Any particular technology should only do what only that particular technology can do.

Take JavaScript, for example. It feels “wrong” when a powerful client-side JavaScript framework is applied to something that could be accomplished using HTML. Making a blog that’s a single page app is over-engineering. It violates this principle:

JavaScript should only do what only JavaScript can do.

Need to manage state or immediately update the interface in response to user action? Only JavaScript can do that. But if you need to present the user with some static content, JavaScript can do that …but it’s not the only technology that can do that. HTML would be more appropriate.

I realise that this is basically a reformulation of one of my favourite design principles, the rule of least power:

Choose the least powerful language suitable for a given purpose.

Or, as Derek put it:

In the web front-end stack — HTML, CSS, JS, and ARIA — if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof, and just works.

ARIA should only do what only ARIA can do.

JavaScript should only do what only JavaScript can do.

CSS should only do what only CSS can do.

HTML should only do what only HTML can do.

What I Like About Vue - daverupert.com

Dave enumerates the things about Vue that click for him. The component structure matches his mental model, and crucially, it’s relatively straightforward to add Vue to an existing project instead of ripping everything out and doing things a certain way:

In my experience Angular, React, and a lot of other frameworks ultimately require you to go all in early and establish a large toolchain around these frameworks.

Wednesday, July 24th, 2019

Progressive Enhancement

This post was originally written in 2015, but upon re-reading it today, it still (just about) holds up, so I finally hit publish.

Friday, July 19th, 2019

Micro Frontends

Chris succinctly describes the multiple-iframes-with-multiple-codebases approach to web development, AKA “micro frontends”:

The idea really is that you might build a React app and I build a Vue app and we’ll slap ‘em together on the same page. I definitely come from an era where we laughed-then-winced when we found sites that used multiple versions of jQuery on the same page, plus one thing that loaded all of MooTools and Prototype thrown on there seemingly by accident. We winced because that was a bucket full of JavaScript, mostly duplicated for no reason, causing bugs and slowing down the page. This doesn’t seem all that much different.

Thursday, July 18th, 2019

Ralph Lavelle: On resilience

Thoughts on frameworks, prompted by a re-reading of Resilient Web Design. I quite like the book being described as “a bird’s-eye view of the whole web design circus.”

Wednesday, July 3rd, 2019

How Google Pagespeed works: Improve Your Score and Search Engine Ranking

Ben shares the secret of SEO. Spoiler: the villain turns out to be Too Much JavaScript. Again.

Time to Interactive (TTI) is the most impactful metric to your performance score.

Therefore, to receive a high PageSpeed score, you will need a speedy TTI measurement.

At a high level, there are two significant factors that hugely influence TTI:

  • The amount of JavaScript delivered to the page
  • The run time of JavaScript tasks on the main thread

Tuesday, July 2nd, 2019

The trimCache function in Going Offline …again

It seems that some code that I wrote in Going Offline is haunted. It’s the trimCache function.

First, there was the issue of a typo. Or maybe it’s more of a brainfart than a typo, but either way, there’s a mistake in the syntax that was published in the book.

Now it turns out that there’s also a problem with my logic.

To recap, this is a function that takes two arguments: the name of a cache, and the maximum number of items that cache should hold.

function trimCache(cacheName, maxItems) {

First, we open up the cache:

caches.open(cacheName)
.then( cache => {

Then, we get the items (keys) in that cache:

cache.keys()
.then(keys => {

Now we compare the number of items (keys.length) to the maximum number of items allowed:

if (keys.length > maxItems) {

If there are too many items, delete the first item in the cache—that should be the oldest item:

cache.delete(keys[0])

And then run the function again:

.then(
    trimCache(cacheName, maxItems)
);

A-ha! See the problem?

Neither did I.

It turns out that, even though I’m using then, the function will be invoked immediately, instead of waiting until the first item has been deleted.

Trys helped me understand what was going on by making a useful analogy. You know when you use setTimeout, you can’t put a function—complete with parentheses—as the first argument?

window.setTimeout(doSomething(someValue), 1000);

In that example, doSomething(someValue) will be invoked immediately—not after 1000 milliseconds. Instead, you need to create an anonymous function like this:

window.setTimeout( function() {
    doSomething(someValue)
}, 1000);

Well, it’s the same in my trimCache function. Instead of this:

cache.delete(keys[0])
.then(
    trimCache(cacheName, maxItems)
);

I need to do this:

cache.delete(keys[0])
.then( function() {
    trimCache(cacheName, maxItems)
});

Or, if you prefer the more modern arrow function syntax:

cache.delete(keys[0])
.then( () => {
    trimCache(cacheName, maxItems)
});

Either way, I have to wrap the recursive function call in an anonymous function.
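
Putting all those pieces together, the corrected function looks something like this—assembled from the snippets above rather than copied verbatim from the gist, so the published version may differ slightly:

function trimCache(cacheName, maxItems) {
  caches.open(cacheName)
  .then( cache => {
    cache.keys()
    .then( keys => {
      if (keys.length > maxItems) {
        cache.delete(keys[0])
        .then( () => {
          // Only recurse once the oldest item has actually been deleted.
          trimCache(cacheName, maxItems);
        });
      }
    });
  });
}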

Here’s a gist with the updated trimCache function.

What’s annoying is that this mistake wasn’t throwing an error. Instead, it was causing a performance problem. I’m using this pattern right here on my own site, and whenever my cache of pages or images got too big, the trimCache function would get called …and then wouldn’t stop running.

I’m very glad that—with the help of Trys at last week’s Homebrew Website Club Brighton—I was finally able to get to the bottom of this. If you’re using the trimCache function in your service worker, please update the code accordingly.

Management regrets the error.

Monday, July 1st, 2019

Why Did I Have Difficulty Learning React? - Snook.ca

When people talk about learning React, I think that React, in and of itself, is relatively easy to understand. At least, I felt it was. I have components. I have JSX. I hit some hiccups with required keys or making sure I was wrapping child elements properly. But overall, I felt like I grasped it well enough.

Throw in everything else at the same time, though, and things get confusing because it’s hard at first to recognize what belongs to what. “Oh, this is Redux. That is React. That other thing is lodash. Got it.”

This resonates a lot with Dave’s post:

React is an ecosystem. I feel like it’s a disservice to anyone trying to learn to diminish all that React entails. React shows up on the scene with Babel, Webpack, and JSX (which each have their own learning curve) then quickly branches out into technologies like Redux, React-Router, Immutable.js, Axios, Jest, Next.js, Create-React-App, GraphQL, and whatever weird plugin you need for your app.

8 DOM features you didn’t know existed - LogRocket Blog

If you ignore the slightly insulting and condescending clickbaity title, this is a handy run-down of eight browser features with good support:

  1. extra arguments in addEventListener(),
  2. scrollTo(),
  3. extra arguments in setTimeout() and setInterval(),
  4. the defaultChecked property for checkboxes,
  5. normalize() and wholeText for strings of text,
  6. insertAdjacentElement() and insertAdjacentText(),
  7. event.detail, and
  8. scrollHeight and scrollWidth.
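
As a taster, here’s a quick sketch of the first two in action (the selector is made up for illustration):

// 1. Extra arguments in addEventListener(): run a handler only once.
document.querySelector('.signup-button').addEventListener('click', event => {
  console.log('This will only log on the first click');
}, { once: true });

// 2. scrollTo() with an options object for smooth scrolling.
window.scrollTo({
  top: 0,
  left: 0,
  behavior: 'smooth'
});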