Tags: performance

Saturday, February 18th, 2017

Base64 Encoding & Performance, Part 1: What’s Up with Base64?

Harry clearly outlines the performance problems of Base64 encoding images in stylesheets. He’s got a follow-up post with sample data.

Sunday, February 12th, 2017

Accessibility and Performance | MarcySutton.com

When I heard about Universal JavaScript apps (a.k.a. isomorphic JavaScript), despite the “framework hotness”, I saw real value for accessibility and performance together. With this technique, a JavaScript app is rendered as a complete HTML payload from the server using Node.js, which is then upgraded as client resources download and execute. All of a sudden your Angular app could be usable a lot sooner, even without browser JS. Bells started going off in my head: “this could help accessible user experience, too!”

Thursday, February 9th, 2017

Performance Under Pressure - performance, responsive web design - Bocoup

The transcript of a really great—and entertaining—talk on performance by Wilto. I may have laughed out loud at points.

Most of the web really sucks if you have a slow connection

Just like many people develop with an average connection speed in mind, many people have a fixed view of who a user is. Maybe they think there are customers with a lot of money with fast connections and customers who won’t spend money on slow connections. That is, very roughly speaking, perhaps true on average, but sites don’t operate on average, they operate in particular domains.

Friday, February 3rd, 2017

Isomorphic rendering on the JAM Stack

Phil describes the process of implementing the holy grail of web architecture (which perhaps isn’t as difficult as everyone seems to think it is):

I have been experimenting with something that seemed obvious to me for a while. A web development model which gives a pre-rendered, ready-to-consume, straight-into-the-eyeballs web page at every URL of a site. One which, once loaded, then behaves like a client-side, single page app.

Now that’s resilient web design!

Thursday, January 19th, 2017

Looking beyond launch

It’s all go, go, go at Clearleft while we’re working on a new version of our website …accompanied by a brand new identity. It’s an exciting time in the studio, tinged with the slight stress that comes with any kind of unveiling like this.

I think it’s good to remember that this is the web. I keep telling myself that we’re not unveiling something carved in stone. Even after the launch we can keep making the site better. In fact, if we wait until everything is perfect before we launch, we’ll probably never launch at all.

On the other hand, you only get one chance to make a first impression, right? So it’s got to be good …but it doesn’t have to be done. A website is never done.

I’ve got to get comfortable with that. There are lots of things that I’d like to have done in time for launch, but realistically it’s fine if those things are completed in the subsequent days or weeks.

Adding a service worker and making a nice offline experience? I really want to do that …but it can wait.

What about other performance tweaks? Yes, we’ll try to have every asset—images, fonts—optimised …but maybe not from day one.

Making sure that each page has good metadata—Open Graph? Twitter Cards? Microformats? Maybe even AMP? Sure …but not just yet.

Having gorgeous animations? Again, I really want to have them but as Val rightly points out, animations are an enhancement—a really, really great enhancement.

If anything, putting the site live before doing all these things acts as an incentive to make sure they get done.

So when you see the new site, if you view source or run it through Web Page Test and spot areas for improvement, rest assured we’re on it.

Understanding the Critical Rendering Path

A nice and clear description of how browsers parse and render web pages.

Sunday, January 15th, 2017

Modernizing our Progressive Enhancement Delivery | Filament Group, Inc., Boston, MA

Scott runs through the latest improvements to the Filament Group website. There’s a lot about HTTP2, but also a dab of service workers (using a similar recipe to my site).

Thursday, January 12th, 2017

Saving you bandwidth on Google+ through machine learning

This is an interesting use of voodoo magic (or “machine learning” as we call it now) by Google to interpolate data in a small image to create a larger version. A win for performance.

Wednesday, January 11th, 2017

Making Resilient Web Design work offline

I’ve written before about taking an online book offline, documenting the process behind the web version of HTML5 For Web Designers. A book is quite a static thing so it’s safe to take a fairly aggressive offline-first approach. In fact, a static unchanging book is one of the few situations that AppCache works for. Of course a service worker is better, but until AppCache is removed from browsers (and until service workers are supported across the board), I’m using both. I wouldn’t recommend that for most sites though—use a service worker to enhance them, and avoid AppCache like the plague.

For Resilient Web Design, I took a similar approach to HTML5 For Web Designers but I knew that there was a good chance that some of the content would be getting tweaked at least for a while. So while the approach is still cache-first, I decided to keep the cache fairly fresh.

Here’s my service worker. It starts with the usual stuff: when the service worker is installed, there’s a list of static assets to cache. In this case, that list is literally everything: all the HTML, CSS, JavaScript, and images for the whole site. Again, this is a pattern that works well for a book, but wouldn’t be right for other kinds of websites.
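Roughly speaking, that install step follows the standard pattern. Here’s a minimal sketch, with a placeholder cache name and a much-abbreviated asset list standing in for the real thing:

// The cache name and asset list here are illustrative placeholders
const staticCacheName = 'staticfiles';
const staticAssets = [
    '/',
    '/css/styles.css',
    '/js/scripts.js',
    '/img/cover.png'
];

addEventListener('install', event => {
    // Don't consider the service worker installed until
    // every static asset has been added to the cache
    event.waitUntil(
        caches.open(staticCacheName)
        .then( cache => cache.addAll(staticAssets) )
    );
});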

The real heavy lifting happens with the fetch event. This is where the logic sits for what the service worker should do every time there’s a request for a resource. I’ve documented the logic with comments:

// Look in the cache first, fall back to the network
  // CACHE
  // Did we find the file in the cache?
      // If so, fetch a fresh copy from the network in the background
      // NETWORK
          // Stash the fresh copy in the cache
  // NETWORK
  // If the file wasn't in the cache, make a network request
      // Stash a fresh copy in the cache in the background
  // OFFLINE
  // If the request is for an image, show an offline placeholder
  // If the request is for a page, show an offline message

So my order of preference is:

  1. Try the cache first,
  2. Try the network second,
  3. Fall back to a placeholder as a last resort.

Leaving aside that third part, regardless of whether the response is served straight from the cache or from the network, the cache gets a top-up. If the response is being served from the cache, there’s an additional network request made to get a fresh copy of the resource that was just served. This means that the user might be seeing a slightly stale version of a file, but they’ll get the fresher version next time round.

Again, I think this is acceptable for a book where the tweaks and changes should be fairly minor, but I definitely wouldn’t want to do it on a more dynamic site where the freshness matters more.

Here’s what it usually looks like when a file is served up from the cache:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      return responseFromCache;
  }

I’ve introduced an extra step where the fresher version is fetched from the network. This is where the code can look a bit confusing: the network request is happening in the background after the cached file has already been returned, but the code appears before the return statement:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      // If so, fetch a fresh copy from the network in the background
      event.waitUntil(
          // NETWORK
          fetch(request)
          .then( responseFromFetch => {
              // Stash the fresh copy in the cache
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      return responseFromCache;
  }

It’s asynchronous, see? So even though all that network code appears before the return statement, it’s pretty much guaranteed to complete after the cache response has been returned. You can verify this by putting in some console.log statements:

caches.match(request)
.then( responseFromCache => {
  if (responseFromCache) {
      event.waitUntil(
          fetch(request)
          .then( responseFromFetch => {
              console.log('Got a response from the network.');
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      console.log('Got a response from the cache.');
      return responseFromCache;
  }

Those log statements will appear in this order:

Got a response from the cache.
Got a response from the network.

That’s the opposite order in which they appear in the code. Everything inside the event.waitUntil part is asynchronous.

Here’s the catch: this kind of asynchronous waitUntil hasn’t landed in all the browsers yet. The code I’ve written will fail.

But never fear! Jake has written a polyfill. All I need to do is include that at the start of my serviceworker.js file and I’m good to go:

// Import Jake's polyfill for async waitUntil
importScripts('/js/async-waituntil.js');

I’m also using it when a file isn’t found in the cache, and is returned from the network instead. Here’s what the usual network code looks like:

fetch(request)
  .then( responseFromFetch => {
    return responseFromFetch;
  })

I want to also store that response in the cache, but I want to do it asynchronously—I don’t care how long it takes to put the file in the cache as long as the user gets the response straight away.

Technically, I’m not putting the response in the cache; I’m putting a copy of the response in the cache (it’s a stream, so I need to clone it if I want to do more than one thing with it).

fetch(request)
  .then( responseFromFetch => {
    // Stash a fresh copy in the cache in the background
    let responseCopy = responseFromFetch.clone();
    event.waitUntil(
      caches.open(staticCacheName)
      .then( cache => {
          cache.put(request, responseCopy);
      })
    );
    return responseFromFetch;
  })
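The one branch not shown here is the OFFLINE part from the comment outline. As a rough sketch (the file paths are made up for illustration, not the book’s actual placeholder image and offline page), the fallback logic boils down to something like this, used as the final catch at the end of the fetch handler’s promise chain:

// Illustrative only: these file paths are placeholders
function offlineFallback (request) {
    const accept = request.headers.get('Accept') || '';
    // If the request is for an image, show an offline placeholder
    if (accept.includes('image')) {
        return caches.match('/img/offline.svg');
    }
    // If the request is for a page, show an offline message
    if (accept.includes('text/html')) {
        return caches.match('/offline.html');
    }
}

// e.g. fetch(request).then( ... ).catch( () => offlineFallback(request) )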

That all seems to be working well in browsers that support service workers. For legacy browsers, like Mobile Safari, there’s the much blunter caveman logic of an AppCache manifest.

Here’s the JavaScript that decides whether a browser gets the service worker or the AppCache:

if ('serviceWorker' in navigator) {
  // If service workers are supported
  navigator.serviceWorker.register('/serviceworker.js');
} else if ('applicationCache' in window) {
  // Otherwise inject an iframe to use appcache
  var iframe = document.createElement('iframe');
  iframe.setAttribute('src', '/appcache.html');
  iframe.setAttribute('style', 'width: 0; height: 0; border: 0');
  document.querySelector('footer').appendChild(iframe);
}

Either way, people are making full use of the offline nature of the book and that makes me very happy indeed.

A Tale of Four Caches · Yoav Weiss

A cute explanation of different browser caches:

  • memory cache,
  • service worker cache,
  • disk cache, and
  • push cache.

Thursday, January 5th, 2017

What AMP (Maybe) Means for News Developers - Features - Source: An OpenNews project

So if AMP is useful it’s because it raises the stakes. If we (news developers) don’t figure out faster ways to load our pages for readers, then we’re going to lose a lot of magic.

A number of developers answered questions on the potential effects of Google’s AMP project. This answer resonates a lot with my own feelings:

AMP is basically web performance best practices dressed up as a file format. That’s a very clever solution to what is, at heart, a cultural problem: when management (in one form or another) comes to the CMS team at a news organization and asks to add more junk to the site, saying “we can’t do that because AMP” is a much more powerful argument than trying to explain why a pop-over “Like us on Facebook!” modal is driving our readers to drink.

But the danger is that AMP turns into a long-term “solution” instead of a stop-gap:

So in a sense, the best possible outcome is that AMP is disruptive enough to shake the boardroom into understanding the importance of performance in platform decisions (and making the hard business decisions this demands), but that developers are allowed to implement those decisions in standard HTML instead of adding yet another delivery format to their export pipeline.

The ideal situation looks a lot more like Tim’s proposal:

I would be much more pleased with AMP if it was a spec for Google’s best-practice recommendations rather than effectively a new non-standard format. By using standard HTML/CSS/JS as the building blocks, they’re starting on the right foot, but the reliance on a Google-decreed AMP JavaScript library, use of separate AMP-specific URLs, and encouragement to use a Google-provided CDN are all worrying aspects.

Monday, December 26th, 2016

High Performance Browser Networking (O’Reilly)

Did you know that Ilya’s book was available in its entirety online? I didn’t. But now that I do, I think it’s time I got stuck in and tried to understand the low-level underpinnings of the internet and the web.

Saturday, December 24th, 2016

10 things I learned making the fastest site in the world

Behind the amusing banter there’s some really solid performance advice in here. Good stuff.

Client Side Rendering (CSR), or as I call it “setting money on fire and throwing it in a river” has its uses, but for this site would have been madness.

Monday, December 19th, 2016

Front-End Performance Checklist 2017

You can print out this PDF and then have the satisfaction of ticking off each item on the list as you build your website.

Thursday, November 17th, 2016

Less JavaScript

Every front-end developer at Clearleft went to FFConf last Friday: me, Mark, Graham, Charlotte, and Danielle. We weren’t about to pass up the opportunity to attend a world-class dev conference right here in our home base of Brighton.

The day was unsurprisingly excellent. All the speakers brought their A-game on a wide range of topics. Of course JavaScript was covered, but there was also plenty of mindfood on CSS, accessibility, progressive enhancement, dev tools, creative coding, and even emoji.

Normally FFConf would be a good opportunity to catch up with some Pauls from the Google devrel team, but because of an unfortunate scheduling clash this year, all the Pauls were at Chrome Dev Summit 2016 on the other side of the Atlantic.

I’ve been catching up on the videos from the event. There’s plenty of tech-related stuff: dev tools, web components, and plenty of talk about progressive web apps. But there was also a very, very heavy focus on performance. I don’t just mean performance at the shallow scale of file size and optimisation, but a genuine questioning of the impact of our developer workflows and tools.

In his talk on service workers (what else?), Jake makes the point that not everything needs to be a single page app, echoing Ada’s talk at FFConf.

He makes the point that if you really want fast rendering, nothing on the client side quite beats a server render.

They’ve written a lot of JavaScript to make this quite slow.

Unfortunately, all too often, I hear people say that a progressive web app must be a single page app. And I am not so sure. You might not need a single page app. A single page app can end up being a lot of work and slower. There’s a lot of cargo-culting around single page apps.

Alex followed up his barnstorming talk from the Polymer Summit with some more uncomfortable truths about how mobile phones work.

Cell networks are basically kryptonite to the protocols and assumptions that the web was built on.

And JavaScript frameworks aren’t helping. Quite the opposite.

But make no mistake: if you’re using one of today’s more popular JavaScript frameworks in the most naive way, you are failing by default. There is no sugarcoating this.

Today’s frameworks are mostly a sign of ignorance, or privilege, or both. The good news is that we can fix the ignorance.

Monday, November 14th, 2016

kdzwinel/progress-bar-animation: Making a Doughnut Progress Bar - research notes

This is a thorough write-up of an interesting case where SVG looks like the right tool for the job, but further research leads to some sad-making conclusions.

I love SVG. It’s elegant, scalable and works everywhere. It’s perfect for mobile… as long as it doesn’t move. There is no way to animate it smoothly on Android.

Wednesday, November 2nd, 2016

Performance and assumptions | susan jean robertson

We all make assumptions, it’s natural and normal. But we also need to be jolted out of those assumptions on a regular basis to help us see that not everyone uses the web the way we do. I’ve talked about loving doing support for that reason, but I also love it when I’m on a slow network, it shows me how some people experience the web all the time; that’s good for me.

I’m privileged to have fast devices and fast, broadband internet, along with a lot of other privileges. Not remembering that privilege while I work and assuming that everyone is like me is, quite possibly, one of the biggest mistakes I can make.

Tuesday, November 1st, 2016

Web fonts, boy, I don’t know – Monica Dinculescu

Monica takes a look at the options out there for loading web fonts and settles on a smart asynchronous lazy-loading approach.
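One common shape for that kind of asynchronous loading uses the CSS Font Loading API, so that text renders in a fallback font until the web font is ready. This is a sketch of the general pattern rather than Monica’s exact recipe, and the font name and file path are made up:

// The font family and file path here are placeholders
const webFont = new FontFace('Example Web Font', 'url(/fonts/example.woff2)');

webFont.load().then( loadedFont => {
    // Make the font available to the page...
    document.fonts.add(loadedFont);
    // ...then opt in to it via a class that the CSS keys off
    document.documentElement.classList.add('fonts-loaded');
}).catch( () => {
    // If the font fails or times out, the fallback font simply remains
});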

Saturday, October 29th, 2016

Assumptions

Last year Benedict Evans wrote about the worldwide proliferation and growth of smartphones. Nolan referenced that post when he extrapolated the kind of experience people will be having:

As Benedict Evans has noted, the next billion people who are poised to come online will be using the internet almost exclusively through smartphones. And if Google’s plans with Android One are any indication, then we have a fairly good idea of what kind of devices the “next billion” will be using:

  • They’ll mostly be running Android.
  • They’ll have decent specs (1GB RAM, quad-core processors).
  • They’ll have an evergreen browser and WebView (Android 5+).
  • What they won’t have, however, is a reliable internet connection.

This is the same argument that Tom made in his presentation at Responsive Field Day. The main point is that network conditions are unreliable, and I absolutely agree that we need to be very, very mindful of that. But I’m not so sure about the other conditions either. They smell like assumptions:

Assumptions are the problem. Whether it’s assumptions about screen size, assumptions about being able-bodied, assumptions about network connectivity, or assumptions about browser capabilities, I don’t think any assumptions are a safe bet. Now you might quite reasonably say that we have to make some assumptions when we’re building on the web, and you’d be right. But I think we should still aim to keep them to a minimum.

It’s not necessarily true that all those new web users will be running a WebView browser like Chrome—there are millions of Opera Mini users, and I would expect that number to rise, given all the speed and cost benefits that proxy browsing brings.

I also don’t think that just because a device is a smartphone it necessarily means that it’s a pocket supercomputer. It might seem like a reasonable assumption to make, given the specs of even a low-end smartphone, but the specs don’t tell the whole story.

Alex gave a great presentation at the recent Polymer Summit. He dives deep into exactly how smartphones at the lower end of the market deal with websites.

I don’t normally enjoy listening to talk of hardware and specs, but Alex makes the topic very compelling by tying it directly to how we build websites. In short, we’re using waaaaay too much JavaScript. The message here is not “don’t use JavaScript” but rather “use JavaScript wisely.” Alas, many of the current crop of monolithic frameworks aren’t well suited to this.

Alex’s talk prompted Michael Scharnagl to take a look back at past assumptions and lessons learned on the web, from responsive design to progressive web apps.

We are consistently improving and we often have to realize that our assumptions are wrong.

This is particularly true when we’re making assumptions about how people will access the web.

It’s not enough to talk about the “next billion” in abstract, like an opportunity to reach teeming masses of people ripe for monetization. We need to understand their lives and their priorities with the sort of detail that can build empathy for other people living under vastly different circumstances.

That’s from an article Ethan linked to, noting: