
Sunday, June 16th, 2019

When should you be using Web Workers? — DasSur.ma

Although this piece is ostensibly about why we should be using web workers more, there’s a much, much bigger point about the growing power gap between the devices we developers use and the typical device used by the rest of the planet.

While we are getting faster flagship phones every cycle, the vast majority of people can’t afford these. The more affordable phones are stuck in the past and have highly fluctuating performance metrics. These low-end phones will most likely be used by the massive number of people coming online in the next couple of years. The gap between the fastest and the slowest phone is getting wider, and the median is going down.

Tuesday, June 11th, 2019

Baking accessibility into components: how frameworks help

A very thoughtful post by Hidde that draws a useful distinction between the “internals” of a component (the inner workings of a React component, Vue component, or web component) and the code that wires those components together (the business logic):

I really like working on the detailed stuff that affects users: useful keyboard navigation, sensible focus management, good semantics. But I appreciate not every developer does. I have started to think this may be a helpful separation: some people work on good internals and user experience, others on code that just uses those components and deals with data and caching and solid architecture. Both are valid things, both need love. Maybe we can use the divide for good?

Friday, June 7th, 2019

Jeremy Keith: Going offline - YouTube

Here’s the opening keynote I gave at Frontend United in Utrecht a few weeks back.

Monday, June 3rd, 2019

Resilient Management | A book for new managers in tech

I got a preview copy of this book and, my oh my, it is superb!

If your job involves dealing with humans (or if it might involve dealing with humans in the future), you’ll definitely want to read this.

Thursday, May 30th, 2019

Characteristics of a Strong Performance Culture | TimKadlec.com

Tim looks at the common traits of companies that have built a good culture of web performance:

  1. Top-down support
  2. Data-driven
  3. Clear targets
  4. Automation
  5. Knowledge sharing
  6. Culture of experimentation
  7. User focused, not tool focused

Few companies carry all of these characteristics, so it’s important not to get discouraged if you feel you’re missing a few of them. It’s a process and not a quick one. When I’ve asked folks at companies with all or most of these characteristics how long it took them to get to that point, the answer is typically in years, rarely months. Making meaningful changes to culture is much slower and far more difficult than making technical changes, but absolutely critical if you want those technical changes to have the impact you’re hoping for.

Wednesday, May 22nd, 2019

Complexity Explorables

A cornucopia of interactive visualisations. You control the horizontal. You control the vertical. Networks, flocking, emergence, diffusion …it’s all here.

Our intern program is returning for 2019 | Clearleft

Know any graduates who’d like to take part in a fun (paid) three month scheme at Clearleft? Send ‘em our way.

Tuesday, May 21st, 2019

Can “Indie” Social Media Save Us? | The New Yorker

This is a really great, balanced profile of the Indie Web movement. There’s thoughtful criticism alongside some well-deserved praise:

If we itemize the woes currently afflicting the major platforms, there’s a strong case to be made that the IndieWeb avoids them. When social-media servers aren’t controlled by a small number of massive public companies, the incentive to exploit users diminishes. The homegrown, community-oriented feel of the IndieWeb is superior to the vibe of anxious narcissism that’s degrading existing services.

Take Back Your Web - Tantek Çelik on Vimeo

Tantek’s barnstorming closing talk from Beyond Tellerrand. This is well worth 30 minutes of your time.

Own your domain. Own your content. Own your social connections. Own your reading experience. IndieWeb services, tools, and standards enable you to take back your web.

Going Critical — Melting Asphalt

This is an utterly fascinating interactive description of network effects, complete with Nicky Case style games. Play around with the parameters and suddenly you can see things “going viral”:

We can see similar things taking place in the landscape for ideas and inventions. Often the world isn’t ready for an idea, in which case it may be invented again and again without catching on. At the other extreme, the world may be fully primed for an invention (lots of latent demand), and so as soon as it’s born, it’s adopted by everyone. In-between are ideas that are invented in multiple places and spread locally, but not enough so that any individual version of the idea takes over the whole network all at once. In this latter category we find e.g. agriculture and writing, which were independently invented ~10 and ~3 times respectively.

Play around somewhere and you start to see why cities are where ideas have sex:

What I learned from the simulation above is that there are ideas and cultural practices that can take root and spread in a city that simply can’t spread out in the countryside. (Mathematically can’t.) These are the very same ideas and the very same kinds of people. It’s not that rural folks are e.g. “small-minded”; when exposed to one of these ideas, they’re exactly as likely to adopt it as someone in the city. Rather, it’s that the idea itself can’t go viral in the countryside because there aren’t as many connections along which it can spread.
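That intuition is easy to poke at with a toy simulation. Here’s a rough sketch of my own (nothing to do with the actual code behind the explorable): seed an idea on a random network and watch how the density of connections, not the idea itself, decides the outcome.

function simulate(numNodes, avgDegree, transmissionRate) {
    // Build a random network.
    const neighbours = Array.from({length: numNodes}, () => []);
    for (let e = 0; e < numNodes * avgDegree / 2; e++) {
        const a = Math.floor(Math.random() * numNodes);
        const b = Math.floor(Math.random() * numNodes);
        if (a !== b) {
            neighbours[a].push(b);
            neighbours[b].push(a);
        }
    }
    // Seed one node with the idea and let it spread.
    const adopted = new Set([0]);
    let frontier = [0];
    while (frontier.length > 0) {
        const next = [];
        for (const node of frontier) {
            for (const other of neighbours[node]) {
                if (!adopted.has(other) && Math.random() < transmissionRate) {
                    adopted.add(other);
                    next.push(other);
                }
            }
        }
        frontier = next;
    }
    return adopted.size / numNodes;
}

console.log('countryside:', simulate(10000, 2, 0.4)); // usually fizzles out
console.log('city:', simulate(10000, 8, 0.4)); // usually goes viral

Same idea, same people, same transmission rate; only the number of connections differs.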

This really is a wonderful web page! (and it’s licensed under a Creative Commons Zero licence)

We tend to think that if something’s a good idea, it will eventually reach everyone, and if something’s a bad idea, it will fizzle out. And while that’s certainly true at the extremes, in between are a bunch of ideas and practices that can only go viral in certain networks. I find this fascinating.

Thursday, May 16th, 2019

Welcome to Accessible App | Accessible App

A very welcome project from Marcus Herrmann, documenting how to make common interaction patterns accessible in popular frameworks: Vue, React, and Angular.

Thursday, May 9th, 2019

Head’s role

I have a bittersweet feeling today. Danielle is moving on from Clearleft.

I used to get really down when people left. Over time I’ve learned not to take it as such a bad thing. I mean, of course it’s sad when someone moves on, but for them, it’s exciting. And I should be sharing in that excitement, not putting a damper on it.

Besides, people tend to stay at Clearleft for years and years—in the tech world, that’s unheard of. So it’s not really so terrible when they decide to head out to pastures new. They’ll always be Clearlefties. Just look at the lovely parting words from Harry, Paul, Ellen, and Ben:

Working at Clearleft was one of the best decisions I ever made. 6 years of some work that I’m most proud of, amongst some of the finest thinkers I’ve ever met.

(Side note: I’ve been thinking about starting a podcast where I chat to ex-Clearlefties. We could reflect on the past, look to the future, and generally just have a catch-up. Would that be self indulgent or interesting? Let me know what you think.)

So of course I’m going to miss working with Danielle, but as with other former ‘lefties, I’m genuinely excited to see what happens next for her. Clearleft has had an excellent three years of her time and now it’s another company’s turn.

In the spirit of “one door closes, another opens,” Danielle’s departure creates an opportunity for someone else. Fancy working at Clearleft? Well, we’re looking for a head of front-end development.

Do you remember back at the start of the year when we were hiring a front-end developer, and I wrote about writing job postings?

My first instinct was to look at other job ads and take my cue from them. But, let’s face it, most job ads are badly written, and prone to turning into laundry lists. So I decided to just write like I normally would. You know, like a human.

That worked out really well. We ended up hiring the ridiculously talented Trys Mudford. Success!

So I’ve taken the same approach with this job ad. I’ve tried to paint as clear and honest a picture as I can of what this role would entail. Like it says, there are three main parts to the job:

  • business support,
  • technical leadership, and
  • professional development.

Now, I could easily imagine someone reading the job description and thinking, “Nope! Not for me.” Let’s face it: There Will Be Meetings. And a whole lotta context switching:

Within the course of one day, you might go from thinking about thorny code problems to helping someone on your team with their career plans to figuring out how to land new business in a previously uncharted area of technology.

I can equally imagine someone reading that and thinking “Yes! This is what I’ve been waiting for.”

Oh, and in case you’re wondering why I’m not taking this role …well, in the short term, I will for a while, but I’d consider myself qualified for maybe one third to one half of the required tasks. Yes, I can handle the professional development side of things (in fact, I really, really enjoy that). I can handle some of the technical leadership stuff—if we’re talking about HTML, CSS, JavaScript, accessibility, and performance. But all of the back-of-the-front-end stuff—build tools, libraries, toolchains—is beyond me. And I think I’d be rubbish at the business support stuff, mostly because that doesn’t excite me much. But maybe it excites you! If so, you should apply.

I can picture a few scenarios where this role could be the ideal career move…

Suppose you’re a lead developer at a product company. You enjoy leading a team of devs, and you like setting the technical direction when it comes to the tools and techniques being used. But maybe you’re frustrated by always working on the same product with the same tech stack. The agency world, where every project is different, might be exactly what you’re looking for.

Or maybe you’re an accomplished and experienced front-end developer, freelancing and contracting for years. Perhaps you’re less enamoured with being so hands-on with the code all the time. Maybe you’ve realised that what you really enjoy is solving problems and evaluating technologies, and you’d be absolutely fine with having someone else take care of the implementation. Moving into a lead role like this might be the perfect way to make the best use of your time and have more impact with your decisions.

You get the idea. If any of this is sounding intriguing to you, you should definitely apply for the role. What do you have to lose?

Also, as it says in the job ad:

If you’re from a group that is under-represented in tech, please don’t hesitate to get in touch.

Distinguishing cached vs. network HTML requests in a Service Worker | Trys Mudford

Less than 24 hours after I put the call out for a solution to this gnarly service worker challenge, Trys has come up with a solution.

Wednesday, May 8th, 2019

Timing out

Service workers are great for creating a good user experience when someone is offline. Heck, the book I wrote about service workers is literally called Going Offline.

But in some ways, the offline experience is relatively easy to handle. It’s a binary situation; either you’re online or you’re offline. What’s more challenging—and probably more common—is the situation that Jake calls Lie-Fi. That’s when technically you’ve got a network connection …but it’s a shitty connection, like one bar of mobile signal. In that situation, because there’s technically a connection, the user gets a slow frustrating experience. Whatever code you’ve got in your service worker for handling offline situations will never get triggered. When you’re handling fetch events inside a service worker, there’s no automatic time-out.

But you can make one.

That’s what I’ve done recently here on adactio.com. Before showing you what I added to my service worker script to make that happen, let me walk you through my existing strategy for handling offline situations.

Service worker strategies

Alright, so in my service worker script, I’ve got a block of code for handling requests from fetch events:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    // Do something with this request.
});

I’ve got two strategies in my code. One is for dealing with requests for pages:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
}

By adding an else clause I can have a different strategy for dealing with requests for anything else—images, style sheets, scripts, and so on:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
} else {
    // Code for handling everything else.
}

For page requests, I’m going to try to go to the network first:

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        return responseFromFetch;
    })

My logic is:

When someone requests a page, try to fetch it from the network.

If that doesn’t work, we’re in an offline situation. That triggers the catch clause. That’s where I have my offline strategy: show a custom offline page that I’ve previously cached (during the install event):

.catch( fetchError => {
    return caches.match('/offline');
})
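(The install code itself isn’t shown in this post; a minimal sketch of it might look like this, assuming the offline page lives at /offline and using the same cache name as below:)

addEventListener('install', installEvent => {
    installEvent.waitUntil(
        caches.open('pages')
        .then( pagesCache => {
            return pagesCache.addAll(['/offline']);
        })
    );
});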

Now my logic has been expanded to this:

When someone requests a page, try to fetch it from the network, but if that doesn’t work, show a custom offline page instead.

So my overall code for dealing with requests for pages looks like this:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        fetch(request)
        .then( responseFromFetch => {
            return responseFromFetch;
        })
        .catch( fetchError => {
            return caches.match('/offline');
        })
    );
}

Now I can fill in the else statement that handles everything else—images, style sheets, scripts, and so on. Here my strategy is different. I’m looking in my caches first, and I only fetch the file from the network if the file can’t be found in any cache:

caches.match(request)
.then( responseFromCache => {
    return responseFromCache || fetch(request);
})

Here’s all that fetch-handling code put together:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

Good.

Cache as you go

Now I want to introduce an extra step in the part of the code where I deal with requests for pages. Whenever I fetch a page from the network, I’m going to take the opportunity to squirrel it away in a cache. I’m calling that cache “pages”. I’m imaginative like that.

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        const copy = responseFromFetch.clone();
        try {
            fetchEvent.waitUntil(
                caches.open('pages')
                .then( pagesCache => {
                    return pagesCache.put(request, copy);
                })
            )
        } catch(error) {
            console.error(error);
        }
        return responseFromFetch;
    })

You’ll notice that I can’t put the response itself (responseFromFetch) into the cache. That’s a stream that I only get to use once. Instead I need to make a copy:

const copy = responseFromFetch.clone();

That’s what gets put in the pages cache:

fetchEvent.waitUntil(
    caches.open('pages')
    .then( pagesCache => {
        return pagesCache.put(request, copy);
    })
)

Now my logic for page requests has an extra piece to it:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, show a custom offline page instead.

Here’s my updated fetch-handling code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                const copy = responseFromFetch.clone();
                try {
                    fetchEvent.waitUntil(
                        caches.open('pages')
                        .then( pagesCache => {
                            return pagesCache.put(request, copy);
                        })
                    )
                } catch(error) {
                    console.error(error);
                }
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

I call this the cache-as-you-go pattern. The more pages someone views on my site, the more pages they’ll have cached.

Now that there’s an ever-growing cache of previously visited pages, I can update my offline fallback. Currently, I reach straight for the custom offline page:

.catch( fetchError => {
    return caches.match('/offline');
})

But now I can try looking for a cached copy of the requested page first:

.catch( fetchError => {
    return caches.match(request)
    .then( responseFromCache => {
        return responseFromCache || caches.match('/offline');
    })
})

Now my offline logic is expanded:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead.

I can also access this ever-growing cache of pages from my custom offline page to show people which pages they can revisit, even if there’s no internet connection.
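Here’s a rough sketch of how that could work as a script on the offline page itself (assuming the same “pages” cache; the list-building is just illustrative):

if ('caches' in window) {
    caches.open('pages')
    .then( pagesCache => {
        return pagesCache.keys();
    })
    .then( requests => {
        // Build a list of links to previously cached pages.
        const list = document.createElement('ul');
        requests.forEach( request => {
            const item = document.createElement('li');
            const link = document.createElement('a');
            link.href = request.url;
            link.textContent = new URL(request.url).pathname;
            item.appendChild(link);
            list.appendChild(item);
        });
        document.body.appendChild(list);
    });
}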

So far, so good. Everything I’ve outlined so far is a good robust strategy for handling offline situations. Now I’m going to deal with the lie-fi situation, and it’s that cache-as-you-go strategy that sets me up nicely.

Timing out

I want to throw this addition into my logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

The first thing I’m going to do is rewrite my code a bit. If the fetch event is for a page, I’m going to respond with a promise:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        new Promise( resolveWithResponse => {
            // Code for handling page requests.
        })
    );
}

Promises are kind of weird things to get your head around. They’re tailor-made for doing things asynchronously. You can set up two conditions: a success condition and a failure condition. If the success condition is executed, then we say the promise has resolved. If the failure condition is executed, then the promise rejects.
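Here’s a tiny stand-alone illustration, nothing to do with service workers, just to show the shape of the thing:

const promiseOfAResponse = new Promise( (resolve, reject) => {
    setTimeout( () => {
        resolve('Here you go!'); // the success condition: the promise resolves
        // reject('Sorry, no.'); // the failure condition: the promise rejects
    }, 1000);
});
promiseOfAResponse
.then( result => console.log(result) )
.catch( error => console.error(error) );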

In my re-written code, I’m calling the success condition resolveWithResponse (and I haven’t bothered with a failure condition, tsk, tsk). I’m going to use resolveWithResponse in my promise everywhere that I used to have a return statement:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                fetch(request)
                .then( responseFromFetch => {
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        )
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    })
                })
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

By itself, rewriting my code as a promise doesn’t change anything. Everything’s working the same as it did before. But now I can introduce the time-out logic. I’m going to put this inside my promise:

const timer = setTimeout( () => {
    caches.match(request)
    .then( responseFromCache => {
        if (responseFromCache) {
            resolveWithResponse(responseFromCache);
        }
    })
}, 3000);

If a request takes three seconds (3000 milliseconds), then that code will execute. At that point, the promise attempts to resolve with a response from the cache instead of waiting for the network. If there is a cached response, that’s what the user now gets. If there isn’t, then the wait continues for the network.

The last thing left for me to do is cancel the countdown to timing out if a network response does return within three seconds. So I put this in the then clause that’s triggered by a successful network response:

clearTimeout(timer);

I also add the clearTimeout statement to the catch clause that handles offline situations. Here’s the final code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                const timer = setTimeout( () => {
                    caches.match(request)
                    .then( responseFromCache => {
                        if (responseFromCache) {
                            resolveWithResponse(responseFromCache);
                        }
                    })
                }, 3000);
                fetch(request)
                .then( responseFromFetch => {
                    clearTimeout(timer);
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        )
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    clearTimeout(timer);
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    })
                })
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

That’s the JavaScript translation of this logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

For everything else, try finding a cached version first, otherwise fetch it from the network.

Pros and cons

As with all service worker enhancements to a website, this strategy will do absolutely nothing for first-time visitors. If you’ve never visited my site before, you’ve got nothing cached. But the more you return to the site, the more your cache is primed for speedy retrieval.

I think that serving up a cached copy of a page when the network connection is flaky is a pretty good strategy …most of the time. If we’re talking about a blog post on this site, then sure, there won’t be much that the reader is missing out on—a fixed typo or ten; maybe some additional webmentions at the end of a post. But if we’re talking about the home page, then a reader with a flaky network connection might think there’s nothing new to read when they’re served up a stale version.

What I’d really like is some way to know—on the client side—whether or not the currently-loaded page came from a cache or from a network. Then I could add some kind of interface element that says, “Hey, this page might be stale—click here if you want to check for a fresher version.” I’d also need some way in the service worker to identify any requests originating from that interface element and make sure they always go out to the network.

I think that should be doable somehow. If you can think of a way to do it, please share it. Write a blog post and send me the link.
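To show the kind of thing I mean (purely a sketch, and untested): the service worker could stamp any HTML it serves from the cache, and some client-side code could then look for the stamp. The meta name here is made up:

// In the service worker: wrap a cached page response before resolving with it.
function stampAsCached(responseFromCache) {
    return responseFromCache.text()
    .then( html => {
        const stamped = html.replace('</head>', '<meta name="served-from" content="cache"></head>');
        return new Response(stamped, {
            status: responseFromCache.status,
            statusText: responseFromCache.statusText,
            headers: responseFromCache.headers
        });
    });
}

// On the client: check for the stamp.
if (document.querySelector('meta[name="served-from"][content="cache"]')) {
    // Show the “this page might be stale” interface element here,
    // and make its refresh request bypass the cache (for example,
    // with a header that the service worker checks for).
}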

But even without the option to over-ride the time-out, I’m glad that I’m at least doing something to handle the lie-fi situation. Perhaps I should write a sequel to Going Offline called Still Online But Only In Theory Because The Connection Sucks.

Thursday, May 2nd, 2019

Frameworking

There are many reasons to use a JavaScript framework like Vue, Angular, or React. Last year, Nicole asked for some of those reasons. Her question received many, many answers from people pointing out the benefits of using a framework. Interestingly, though, not a single one of those benefits was for end users.

(Mind you, if the framework is being used on the server to pre-render pages, then it’s a moot point—in that situation, it makes no difference to the end user whether you use a framework or not.)

Hidde recently tried using a client-side JavaScript framework for the first time and documented the process:

In the last few months I built my very first framework-based front-end, in Vue.js. I complemented it with a router, a store and a GraphQL library, in order to have, respectively, multiple (virtual) pages, globally shared data and a smart way to load new data in my templates.

It’s a very even-handed write-up. I highly recommend reading it. He describes the pros and cons of using a framework and using vanilla JavaScript:

I am glad I tried a framework and found its features were extremely helpful in creating a consistent interface for my users. My hope is though, that I won’t forget about vanilla. It’s perfectly valid to build a website with no or few dependencies.

Speaking of vanilla JavaScript… the blogging machine that is Chris Ferdinandi also wrote a comparison post recently, asking Why do people choose frameworks over vanilla JS? Again, it’s very even-handed and well worth a read. He readily concedes that if you’re working at scale, a framework is almost certainly a good idea:

If you’re building a large scale application (literally Facebook, Twitter, QuickBooks scale), the performance wins of a framework make the overhead worth it.

Alas, I’ve seen many, many framework-driven sites that are most definitely not operating at that scale. Trys speaks the honest truth here:

We kid ourselves into thinking we’re building groundbreakingly complex systems that require bleeding-edge tools, but in reality, much of what we build is a way to render two things: a list, and a single item. Here are some users, here is a user. Here are your contacts, here are your messages with that contact. There ain’t much more to it than that.

Just the other day, I saw a new site launch that was mostly a marketing site—the home page weighed over five megabytes, two megabytes of which were taken up with JavaScript, and the whole thing required JavaScript to render text to the screen (I’m not going to link to it because I don’t want to engage in any kind of public shaming and finger-wagging).

I worry that all the perfectly valid (developer experience) reasons for using a framework are outweighing the more important (user experience) reasons for avoiding shipping your dependencies to end users. Like Alex says:

If your conception of “DX” doesn’t include it, or isn’t subservient to the user experience, rethink.

And yes, I am going to take this opportunity to link once again to Alex’s article The “Developer Experience” Bait-and-Switch. Please read it if you haven’t already. Please re-read it if you have.

Anyway, my main reason for writing this is to point you to thoughtful posts like Hidde’s and Chris’s. I think it’s great to see people thoughtfully weighing up the pros and cons of choosing any particular technology—I’m a bit obsessed with the topic of evaluating technology.

If you’re weighing up the pros and cons of using, say, a particular JavaScript library or framework, that’s wonderful. My worry is that there are people working in front-end development who aren’t putting that level of thought into their technology choices, but are instead using a particular framework because it’s what they’re used to.

To quote Grace Hopper:

The most dangerous phrase in the language is, ‘We’ve always done it this way.’

AMP as your web framework – The AMP Blog

The bait’n’switch is laid bare. First, AMP is positioned as a separate format. Then, only AMP pages are allowed to rank in the top stories carousel. Now, let’s pretend none of that ever happened and act as though AMP is just another framework. Oh, and those separate AMP pages that you made? Turns out that was all just “transitional” and you’re supposed to make your entire site in AMP now.

I would genuinely love to know how the Polymer team at Google feel about this pivot. Everything claimed in this blog post about AMP is actually true of Polymer (and other libraries of web components that don’t have the luxury of bribing developers with SEO ranking).

Some alternative facts from the introduction:

AMP isn’t another “channel” or “format” that’s somehow not the web.

Weird …because that’s exactly how it was sold to us (as a direct competitor to similar offerings from Apple and Facebook).

It’s not an SEO thing.

That is outright false. Ask any company actually using AMP why they use it.

It’s not a replacement for HTML.

And yet, the article goes on to try to convince you to replace HTML with AMP.

Wednesday, April 24th, 2019

Interview with Kyle Simpson (O’Reilly Fluent Conference 2016) - YouTube

I missed this when it was first posted three years ago, but now I think I’ll be revisiting this 12 minute interview every few months.

Everything that Kyle says here is spot on, nuanced, and thoughtful. He talks about abstraction, maintainability, learning, and complexity.

I want a transcript of the whole thing.

Monday, April 15th, 2019

James Bridle / New Ways of Seeing

James has a new four part series on Radio 4. Episodes will be available for huffduffing shortly after broadcast.

New Ways of Seeing considers the impact of digital technologies on the way we see, understand, and interact with the world. Building on John Berger’s seminal Ways of Seeing from 1972, the show explores network infrastructures, digital images, systemic bias, education and the environment, in conversation with a number of contemporary art practitioners.

Saturday, April 13th, 2019

Offline fallback page with service worker - Modern Web Development: Tales of a Developer Advocate by Paul Kinlan

Paul describes a fairly straightforward service worker recipe: a custom offline page for failed requests.

Friday, April 12th, 2019

Disenchantment - Tim Novis

I would urge front-end developers to take a step back, breathe, and reassess. Let’s stop over-engineering for the sake of it. Let’s think what we can do with the basic tools, progressive enhancement and a simpler approach to building websites. There are absolutely valid use cases for SPAs, React, et al. and I’ll continue to use these tools regularly and when it’s necessary, I’m just not sure that’s 100% of the time.