Monday, May 20th, 2019

Web Bloat Score Calculator

Page web bloat score (WebBS for short) is calculated as follows:

WebBS = TotalPageSize / PageImageSize

Yes, this is a tongue-in-cheek, somewhat arbitrary measurement, but it’s well worth reading through the rationale for it.

How can the image of a page be smaller than the page itself?
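Expressed as code, the calculation is trivial (a quick sketch of my own, not the calculator’s actual source):

// Sizes are in bytes; a score above 1 means the page weighs more
// than a full-page screenshot of itself.
function webBloatScore(totalPageSize, pageImageSize) {
    return totalPageSize / pageImageSize;
}

// For example, a 3MB page whose full-page screenshot is 1.2MB:
webBloatScore(3 * 1024 * 1024, 1.2 * 1024 * 1024); // 2.5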

Thursday, May 16th, 2019

Lighthouse | Eric Bailey

What if accessibility were a ranking signal for Google search results?

Here’s a thought: what if Google put its thumb on the scale again, only this time for accessibility? What if it treated the Lighthouse accessibility score as a first-class ranking metric?

Welcome to Accessible App | Accessible App

A very welcome project from Marcus Herrmann, documenting how to make common interaction patterns accessible in popular frameworks: Vue, React, and Angular.

Thursday, May 9th, 2019

Head’s role

I have a bittersweet feeling today. Danielle is moving on from Clearleft.

I used to get really down when people left. Over time I’ve learned not to take it as such a bad thing. I mean, of course it’s sad when someone moves on, but for them, it’s exciting. And I should be sharing in that excitement, not putting a damper on it.

Besides, people tend to stay at Clearleft for years and years—in the tech world, that’s unheard of. So it’s not really so terrible when they decide to head out to pastures new. They’ll always be Clearlefties. Just look at the lovely parting words from Harry, Paul, Ellen, and Ben:

Working at Clearleft was one of the best decisions I ever made. 6 years of some work that I’m most proud of, amongst some of the finest thinkers I’ve ever met.

(Side note: I’ve been thinking about starting a podcast where I chat to ex-Clearlefties. We could reflect on the past, look to the future, and generally just have a catch-up. Would that be self-indulgent or interesting? Let me know what you think.)

So of course I’m going to miss working with Danielle, but as with other former ‘lefties, I’m genuinely excited to see what happens next for her. Clearleft has had an excellent three years of her time and now it’s another company’s turn.

In the spirit of “one door closes, another opens,” Danielle’s departure creates an opportunity for someone else. Fancy working at Clearleft? Well, we’re looking for a head of front-end development.

Do you remember back at the start of the year when we were hiring a front-end developer, and I wrote about writing job postings?

My first instinct was to look at other job ads and take my cue from them. But, let’s face it, most job ads are badly written, and prone to turning into laundry lists. So I decided to just write like I normally would. You know, like a human.

That worked out really well. We ended up hiring the ridiculously talented Trys Mudford. Success!

So I’ve taken the same approach with this job ad. I’ve tried to paint as clear and honest a picture as I can of what this role would entail. Like it says, there are three main parts to the job:

  • business support,
  • technical leadership, and
  • professional development.

Now, I could easily imagine someone reading the job description and thinking, “Nope! Not for me.” Let’s face it: There Will Be Meetings. And a whole lotta context switching:

Within the course of one day, you might go from thinking about thorny code problems to helping someone on your team with their career plans to figuring out how to land new business in a previously uncharted area of technology.

I can equally imagine someone reading that and thinking “Yes! This is what I’ve been waiting for.”

Oh, and in case you’re wondering why I’m not taking this role …well, in the short term, I will for a while, but I’d consider myself qualified for maybe one third to one half of the required tasks. Yes, I can handle the professional development side of things (in fact, I really, really enjoy that). I can handle some of the technical leadership stuff—if we’re talking about HTML, CSS, JavaScript, accessibility, and performance. But all of the back-of-the-front-end stuff—build tools, libraries, toolchains—is beyond me. And I think I’d be rubbish at the business support stuff, mostly because that doesn’t excite me much. But maybe it excites you! If so, you should apply.

I can picture a few scenarios where this role could be the ideal career move…

Suppose you’re a lead developer at a product company. You enjoy leading a team of devs, and you like setting the technical direction when it comes to the tools and techniques being used. But maybe you’re frustrated by always working on the same product with the same tech stack. The agency world, where every project is different, might be exactly what you’re looking for.

Or maybe you’re an accomplished and experienced front-end developer, freelancing and contracting for years. Perhaps you’re less enamoured with being so hands-on with the code all the time. Maybe you’ve realised that what you really enjoy is solving problems and evaluating technologies, and you’d be absolutely fine with having someone else take care of the implementation. Moving into a lead role like this might be the perfect way to make the best use of your time and have more impact with your decisions.

You get the idea. If any of this is sounding intriguing to you, you should definitely apply for the role. What do you have to lose?

Also, as it says in the job ad:

If you’re from a group that is under-represented in tech, please don’t hesitate to get in touch.

Distinguishing cached vs. network HTML requests in a Service Worker | Trys Mudford

Less than 24 hours after I put the call out for a solution to this gnarly service worker challenge, Trys has come up with a solution.

Wednesday, May 8th, 2019

Timing out

Service workers are great for creating a good user experience when someone is offline. Heck, the book I wrote about service workers is literally called Going Offline.

But in some ways, the offline experience is relatively easy to handle. It’s a binary situation; either you’re online or you’re offline. What’s more challenging—and probably more common—is the situation that Jake calls Lie-Fi. That’s when technically you’ve got a network connection …but it’s a shitty connection, like one bar of mobile signal. In that situation, because there’s technically a connection, the user gets a slow frustrating experience. Whatever code you’ve got in your service worker for handling offline situations will never get triggered. When you’re handling fetch events inside a service worker, there’s no automatic time-out.

But you can make one.

That’s what I’ve done recently here on adactio.com. Before showing you what I added to my service worker script to make that happen, let me walk you through my existing strategy for handling offline situations.

Service worker strategies

Alright, so in my service worker script, I’ve got a block of code for handling requests from fetch events:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    // Do something with this request.
});

I’ve got two strategies in my code. One is for dealing with requests for pages:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
}

By adding an else clause I can have a different strategy for dealing with requests for anything else—images, style sheets, scripts, and so on:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
} else {
    // Code for handling everything else.
}

For page requests, I’m going to try to go the network first:

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        return responseFromFetch;
    })

My logic is:

When someone requests a page, try to fetch it from the network.

If that doesn’t work, we’re in an offline situation. That triggers the catch clause. That’s where I have my offline strategy: show a custom offline page that I’ve previously cached (during the install event):

.catch( fetchError => {
    return caches.match('/offline');
})
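
That custom offline page gets cached ahead of time, during the service worker’s install event. That code isn’t part of this walkthrough, but a minimal sketch of it would look something like this (the cache name 'static' is an assumption):

addEventListener('install', installEvent => {
    installEvent.waitUntil(
        caches.open('static')
        .then( staticCache => {
            return staticCache.addAll(['/offline']);
        })
    );
});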

Now my logic has been expanded to this:

When someone requests a page, try to fetch it from the network, but if that doesn’t work, show a custom offline page instead.

So my overall code for dealing with requests for pages looks like this:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        fetch(request)
        .then( responseFromFetch => {
            return responseFromFetch;
        })
        .catch( fetchError => {
            return caches.match('/offline');
        })
    );
}

Now I can fill in the else statement that handles everything else—images, style sheets, scripts, and so on. Here my strategy is different. I’m looking in my caches first, and I only fetch the file from the network if the file can’t be found in any cache:

fetchEvent.respondWith(
    caches.match(request)
    .then( responseFromCache => {
        return responseFromCache || fetch(request);
    })
);

Here’s all that fetch-handling code put together:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

Good.

Cache as you go

Now I want to introduce an extra step in the part of the code where I deal with requests for pages. Whenever I fetch a page from the network, I’m going to take the opportunity to squirrel it away in a cache. I’m calling that cache “pages”. I’m imaginative like that.

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        const copy = responseFromFetch.clone();
        try {
            fetchEvent.waitUntil(
                caches.open('pages')
                .then( pagesCache => {
                    return pagesCache.put(request, copy);
                })
            )
        } catch(error) {
            console.error(error);
        }
        return responseFromFetch;
    })

You’ll notice that I can’t put the response itself (responseFromFetch) into the cache. That’s a stream that I only get to use once. Instead I need to make a copy:

const copy = responseFromFetch.clone();

That’s what gets put in the pages cache:

fetchEvent.waitUntil(
    caches.open('pages')
    .then( pagesCache => {
        return pagesCache.put(request, copy);
    })
)

Now my logic for page requests has an extra piece to it:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, show a custom offline page instead.

Here’s my updated fetch-handling code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                const copy = responseFromFetch.clone();
                try {
                    fetchEvent.waitUntil(
                        caches.open('pages')
                        .then( pagesCache => {
                            return pagesCache.put(request, copy);
                        })
                    )
                } catch(error) {
                    console.error(error);
                }
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

I call this the cache-as-you-go pattern. The more pages someone views on my site, the more pages they’ll have cached.

Now that there’s an ever-growing cache of previously visited pages, I can update my offline fallback. Currently, I reach straight for the custom offline page:

.catch( fetchError => {
    return caches.match('/offline');
})

But now I can try looking for a cached copy of the requested page first:

.catch( fetchError => {
    return caches.match(request)
    .then( responseFromCache => {
        return responseFromCache || caches.match('/offline');
    })
})

Now my offline logic is expanded:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead.

I can also access this ever-growing cache of pages from my custom offline page to show people which pages they can revisit, even if there’s no internet connection.
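
Something along these lines could do that listing on the offline page itself (a rough sketch, not the actual code I’m running):

// List everything in the 'pages' cache as links people can revisit.
caches.open('pages')
.then( pagesCache => {
    return pagesCache.keys();
})
.then( requests => {
    const list = document.createElement('ul');
    requests.forEach( request => {
        const item = document.createElement('li');
        const link = document.createElement('a');
        link.href = request.url;
        link.textContent = request.url;
        item.appendChild(link);
        list.appendChild(item);
    });
    document.body.appendChild(list);
});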

So far, so good. Everything I’ve outlined so far is a good robust strategy for handling offline situations. Now I’m going to deal with the lie-fi situation, and it’s that cache-as-you-go strategy that sets me up nicely.

Timing out

I want to throw this addition into my logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

The first thing I’m going to do is rewrite my code a bit. If the fetch event is for a page, I’m going to respond with a promise:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        new Promise( resolveWithResponse => {
            // Code for handling page requests.
        })
    );
}

Promises are kind of weird things to get your head around. They’re tailor-made for doing things asynchronously. You can set up two outcomes: a success condition and a failure condition. If the success condition is executed, then we say the promise has resolved. If the failure condition is executed, then the promise rejects.
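
Here’s a tiny standalone illustration (nothing to do with service workers yet):

// A promise that resolves asynchronously, one second from now.
const promise = new Promise( (resolve, reject) => {
    setTimeout( () => {
        resolve('All good!');
    }, 1000);
});
promise.then( message => {
    console.log(message); // logs 'All good!' after one second
});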

In my re-written code, I’m calling the success condition resolveWithResponse (and I haven’t bothered with a failure condition, tsk, tsk). I’m going to use resolveWithResponse in my promise everywhere that I used to have a return statement:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                fetch(request)
                .then( responseFromFetch => {
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        )
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    })
                })
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

By itself, rewriting my code as a promise doesn’t change anything. Everything’s working the same as it did before. But now I can introduce the time-out logic. I’m going to put this inside my promise:

const timer = setTimeout( () => {
    caches.match(request)
    .then( responseFromCache => {
        if (responseFromCache) {
            resolveWithResponse(responseFromCache);
        }
    })
}, 3000);

If a request takes longer than three seconds (3000 milliseconds), that code will execute. At that point, the promise attempts to resolve with a response from the cache instead of waiting for the network. If there is a cached response, that’s what the user now gets. If there isn’t, then the wait for the network continues.

The last thing left for me to do is cancel the countdown to timing out if a network response does return within three seconds. So I put this in the then clause that’s triggered by a successful network response:

clearTimeout(timer);

I also add the clearTimeout statement to the catch clause that handles offline situations. Here’s the final code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                const timer = setTimeout( () => {
                    caches.match(request)
                    .then( responseFromCache => {
                        if (responseFromCache) {
                            resolveWithResponse(responseFromCache);
                        }
                    })
                }, 3000);
                fetch(request)
                .then( responseFromFetch => {
                    clearTimeout(timer);
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        )
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    clearTimeout(timer);
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    })
                })
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

That’s the JavaScript translation of this logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

For everything else, try finding a cached version first, otherwise fetch it from the network.

Pros and cons

As with all service worker enhancements to a website, this strategy will do absolutely nothing for first-time visitors. If you’ve never visited my site before, you’ve got nothing cached. But the more you return to the site, the more your cache is primed for speedy retrieval.

I think that serving up a cached copy of a page when the network connection is flaky is a pretty good strategy …most of the time. If we’re talking about a blog post on this site, then sure, there won’t be much that the reader is missing out on—a fixed typo or ten; maybe some additional webmentions at the end of a post. But if we’re talking about the home page, then a reader with a flaky network connection might think there’s nothing new to read when they’re served up a stale version.

What I’d really like is some way to know—on the client side—whether or not the currently-loaded page came from a cache or from a network. Then I could add some kind of interface element that says, “Hey, this page might be stale—click here if you want to check for a fresher version.” I’d also need some way in the service worker to identify any requests originating from that interface element and make sure they always go out to the network.
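
One untested idea for that last part: the interface element could link to the current URL with a flag in the query string, and the service worker could treat any flagged request as network-only. The flag name here is completely arbitrary:

// An untested sketch: any request carrying an arbitrary ?fresh=true
// flag bypasses the caches and the time-out entirely.
if (request.url.includes('fresh=true')) {
    fetchEvent.respondWith(fetch(request));
    return;
}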

The detection part should still be doable somehow. If you can think of a way to do it, please share it. Write a blog post and send me the link.

But even without the option to over-ride the time-out, I’m glad that I’m at least doing something to handle the lie-fi situation. Perhaps I should write a sequel to Going Offline called Still Online But Only In Theory Because The Connection Sucks.

CSS-only chat

A truly monstrous async web chat using no JS whatsoever on the frontend.

This is …I mean …yes, but …it …I …

Tuesday, May 7th, 2019

Test the impact of ads and third party scripts

This is a very useful new feature in Calibre, the performance monitoring tool. Now you can get data about just how much third-party scripts are affecting your site’s performance:

The best way of circumventing fear and anxiety around third party script performance is to capture metrics that clearly articulate their performance impact.

JavaScript pedalboard

Effects pedals in the browser, using the Web Audio API. Very cool!
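
If you’ve never played with the Web Audio API, this is the rough flavour of it (a minimal sketch of my own, nothing to do with the pedalboard’s actual code):

// Route microphone input through a gain node: the basic building
// block of an effects chain.
navigator.mediaDevices.getUserMedia({ audio: true })
.then( stream => {
    const context = new AudioContext();
    const source = context.createMediaStreamSource(stream);
    const gain = context.createGain();
    gain.gain.value = 0.8; // back the level off a little
    source.connect(gain);
    gain.connect(context.destination);
});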

Be sure to read Trys’s write-up too.

Friday, May 3rd, 2019

Create a responsive grid layout with no media queries, using CSS Grid - Andy Bell

CSS grid and custom properties really are a match made in heaven.

Thursday, May 2nd, 2019

Frameworking

There are many reasons to use a JavaScript framework like Vue, Angular, or React. Last year, Nicole asked for some of those reasons. Her question received many, many answers from people pointing out the benefits of using a framework. Interestingly, though, not a single one of those benefits was for end users.

(Mind you, if the framework is being used on the server to pre-render pages, then it’s a moot point—in that situation, it makes no difference to the end user whether you use a framework or not.)

Hidde recently tried using a client-side JavaScript framework for the first time and documented the process:

In the last few months I built my very first framework-based front-end, in Vue.js. I complemented it with a router, a store and a GraphQL library, in order to have, respectively, multiple (virtual) pages, globally shared data and a smart way to load new data in my templates.

It’s a very even-handed write-up. I highly recommend reading it. He describes the pros and cons of using a framework and using vanilla JavaScript:

I am glad I tried a framework and found its features were extremely helpful in creating a consistent interface for my users. My hope is though, that I won’t forget about vanilla. It’s perfectly valid to build a website with no or few dependencies.

Speaking of vanilla JavaScript… the blogging machine that is Chris Ferdinandi also wrote a comparison post recently, asking Why do people choose frameworks over vanilla JS? Again, it’s very even-handed and well worth a read. He readily concedes that if you’re working at scale, a framework is almost certainly a good idea:

If you’re building a large scale application (literally Facebook, Twitter, QuickBooks scale), the performance wins of a framework make the overhead worth it.

Alas, I’ve seen many, many framework-driven sites that are most definitely not operating at that scale. Trys speaks the honest truth here:

We kid ourselves into thinking we’re building groundbreakingly complex systems that require bleeding-edge tools, but in reality, much of what we build is a way to render two things: a list, and a single item. Here are some users, here is a user. Here are your contacts, here are your messages with that contact. There ain’t much more to it than that.

Just the other day, I saw a new site launch that was mostly a marketing site—the home page weighed over five megabytes, two megabytes of which were taken up with JavaScript, and the whole thing required JavaScript to render text to the screen (I’m not going to link to it because I don’t want to engage in any kind of public shaming and finger-wagging).

I worry that all the perfectly valid (developer experience) reasons for using a framework are outweighing the more important (user experience) reasons for avoiding shipping your dependencies to end users. Like Alex says:

If your conception of “DX” doesn’t include it, or isn’t subservient to the user experience, rethink.

And yes, I am going to take this opportunity to link once again to Alex’s article The “Developer Experience” Bait-and-Switch. Please read it if you haven’t already. Please re-read it if you have.

Anyway, my main reason for writing this is to point you to thoughtful posts like Hidde’s and Chris’s. I think it’s great to see people thoughtfully weighing up the pros and cons of choosing any particular technology—I’m a bit obsessed with the topic of evaluating technology.

If you’re weighing up the pros and cons of using, say, a particular JavaScript library or framework, that’s wonderful. My worry is that there are people working in front-end development who aren’t putting that level of thought into their technology choices, but are instead using a particular framework because it’s what they’re used to.

To quote Grace Hopper:

The most dangerous phrase in the language is, ‘We’ve always done it this way.’

The Simplest Ways to Handle HTML Includes | CSS-Tricks

Chris looks at all the different ways of working around the fact that HTML doesn’t do transclusion. Those ways include (hah!) Scott’s super clever technique and Trys’s little Sergey.

AMP as your web framework – The AMP Blog

The bait’n’switch is laid bare. First, AMP is positioned as a separate format. Then, only AMP pages are allowed in the top stories carousel. Now, let’s pretend none of that ever happened and act as though AMP is just another framework. Oh, and those separate AMP pages that you made? Turns out that was all just “transitional” and you’re supposed to make your entire site in AMP now.

I would genuinely love to know how the Polymer team at Google feel about this pivot. Everything claimed in this blog post about AMP is actually true of Polymer (and other libraries of web components that don’t have the luxury of bribing developers with SEO ranking).

Some alternative facts from the introduction:

AMP isn’t another “channel” or “format” that’s somehow not the web.

Weird …because that’s exactly how it was sold to us (as a direct competitor to similar offerings from Apple and Facebook).

It’s not an SEO thing.

That is outright false. Ask any company actually using AMP why they use it.

It’s not a replacement for HTML.

And yet, the article goes on to try to convince you to replace HTML with AMP.

Monday, April 29th, 2019

Naming things to improve accessibility

Some good advice from Hidde, based on his recent talk Six ways to make your site more accessible.

Friday, April 26th, 2019

How I failed the <a>

I think the situation that Remy outlines here is quite common (in client-rehydrated server-rendered pages), but what’s less common is Remy’s questioning and iteration.

So I now have a simple rule of thumb: if there’s an onClick, there’s got to be an anchor around the component.

Wednesday, April 24th, 2019

Preload, prefetch and other link tags: what they do and when to use them · PerfPerfPerf

Following on from Harry’s slides, here’s another round-up of those rel attribute values that begin with pre.

More Than You Ever Wanted to Know About Resource Hints - Speaker Deck

Slides from Harry’s deep dive into rel values: preconnect, prefetch, and preload.

Who Are Design Systems For? | CSS-Tricks

Chris ponders the motivations behind companies sharing their design systems publicly. Personally, I’ve always seen it as a nice way of sharing work and saying “here’s what worked for us” without necessarily saying that anyone else should use the same system.

That said, I think Chris makes a good point here:

My parting advice is actually to the makers of public design systems: clearly identify who this design system is for and what they are able to do with it.

Interview with Kyle Simpson (O’Reilly Fluent Conference 2016) - YouTube

I missed this when it was first posted three years ago, but now I think I’ll be revisiting this 12-minute interview every few months.

Everything that Kyle says here is spot on, nuanced, and thoughtful. He talks about abstraction, maintainability, learning, and complexity.

I want a transcript of the whole thing.

Front-end Developer Handbook 2019 - Learn the entire JavaScript, CSS and HTML development practice!

The 2019 edition of Cody Lindley’s book is a good jumping-off point with lots of links to handy resources.