Tags: code

Wednesday, June 12th, 2019

Breaking the physical limits of fonts

This broke my brain.

The challenge: in the fewest resources possible, render meaningful text.

  • How small can a font really go?
  • How many bytes of memory would you need (to store it and run it)?
  • How much code would it take to express it?

Let’s see just how far we can take this!

Tuesday, June 11th, 2019

Mornington Crescent - Esolang

A (possibly) Turing complete language:

As the validity and the semantics of a program depend on the structure of the London underground system, which is administered by London Underground Ltd, a subsidiary of Transport for London, who are likely unaware of the existence of this programming language, its future compatibility is uncertain. Programs may become invalid or subtly wrong as the transport company expands or retires some of the network, reroutes lines or renames stations. Features may be removed with no prior consultation with the programming community. For all we know, Mornington Crescent itself may at some point be closed, at which point this programming language will cease to exist.

Monday, June 10th, 2019

(Open) source of anxiety – Increment: Open Source

If we continue as we are, who will maintain the maintainers?

In the world of open source, we tend to give plaudits and respect to makers …but maintainers really need our support and understanding.

Users and new contributors often don’t see, much less think about, the nontechnical issues—like mental health, or work-life balance, or project governance—that maintainers face. And without adequate support, our digital infrastructure, as well as the people who make it run, suffer.

Making QR codes with cloud functions • tommorris.org

Tom makes an endpoint for generating QR codes so you don’t have to rely on the Google Charts API.

He also provides a good definition of “serverless”:

Now, serverless is a very silly buzzword dreamed up by someone from the consultant class who love coming up with terrible names, so I promise I won’t use it any further. Your code obviously runs on a server. It just means it runs on a server someone else manages.

Amazon call it a ‘Lambda Function’. Google call it a ‘Cloud Function’. Microsoft Azure call it simply a ‘Function’. But none of those are very descriptive, because, well, anyone who writes any kind of programming language generally writes functions pretty much all the time in much the same way as anyone who writes English writes paragraphs, and we don’t call our blogging software “Cloud Paragraphs”. (Someone will now, I’m guessing.)

Sunday, June 9th, 2019

German Naming Convention

Don’t write fopen when you can write openFile. Write throwValidationError and not throwVE. Call that name function and not fct. That’s German naming convention. Do this and your readers will appreciate it.

Friday, June 7th, 2019

mathieudutour/medium-to-own-blog: Switch from Medium to your own blog in a few minutes

Following on from Stackbit’s tool, here’s another (more code-heavy) way of migrating from Ev’s blog to your own site.

Thursday, May 16th, 2019

TIL (Today I learned) - Manuel Matuzović

At Clearleft, we’re always saying “Everything is a tiny lesson!”, so I love, love, love this bit of Manuel’s website where he notes down short code snippets of little things he learns.

Friday, May 10th, 2019

HTML Symbols, Entities, Characters and Codes

For all your copying and pasting needs:

A delightful reference for HTML Symbols, Entities and ASCII Character Codes

Thursday, May 9th, 2019

Distinguishing cached vs. network HTML requests in a Service Worker | Trys Mudford

Less than 24 hours after I put the call out for a solution to this gnarly service worker challenge, Trys has come up with a solution.

Wednesday, May 8th, 2019

Timing out

Service workers are great for creating a good user experience when someone is offline. Heck, the book I wrote about service workers is literally called Going Offline.

But in some ways, the offline experience is relatively easy to handle. It’s a binary situation; either you’re online or you’re offline. What’s more challenging—and probably more common—is the situation that Jake calls Lie-Fi. That’s when technically you’ve got a network connection …but it’s a shitty connection, like one bar of mobile signal. In that situation, because there’s technically a connection, the user gets a slow frustrating experience. Whatever code you’ve got in your service worker for handling offline situations will never get triggered. When you’re handling fetch events inside a service worker, there’s no automatic time-out.

But you can make one.

That’s what I’ve done recently here on adactio.com. Before showing you what I added to my service worker script to make that happen, let me walk you through my existing strategy for handling offline situations.

Service worker strategies

Alright, so in my service worker script, I’ve got a block of code for handling requests from fetch events:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    // Do something with this request.
});

I’ve got two strategies in my code. One is for dealing with requests for pages:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
}

By adding an else clause I can have a different strategy for dealing with requests for anything else—images, style sheets, scripts, and so on:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
} else {
    // Code for handling everything else.
}

For page requests, I’m going to try to go to the network first:

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        return responseFromFetch;
    })

My logic is:

When someone requests a page, try to fetch it from the network.

If that doesn’t work, we’re in an offline situation. That triggers the catch clause. That’s where I have my offline strategy: show a custom offline page that I’ve previously cached (during the install event):

.catch( fetchError => {
    return caches.match('/offline');
})

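That custom offline page needs to be in a cache before it can be matched; it gets put there ahead of time, during the install event. A minimal sketch of that step might look something like this (the 'static' cache name is just a placeholder, not necessarily what this site uses):

addEventListener('install', installEvent => {
    installEvent.waitUntil(
        caches.open('static')
        .then( staticCache => {
            // Fetch and cache the custom offline page up front.
            return staticCache.addAll(['/offline']);
        })
    );
});
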
Now my logic has been expanded to this:

When someone requests a page, try to fetch it from the network, but if that doesn’t work, show a custom offline page instead.

So my overall code for dealing with requests for pages looks like this:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        fetch(request)
        .then( responseFromFetch => {
            return responseFromFetch;
        })
        .catch( fetchError => {
            return caches.match('/offline');
        })
    );
}

Now I can fill in the else statement that handles everything else—images, style sheets, scripts, and so on. Here my strategy is different. I’m looking in my caches first, and I only fetch the file from the network if the file can’t be found in any cache:

caches.match(request)
.then( responseFromCache => {
    return responseFromCache || fetch(request);
})

Here’s all that fetch-handling code put together:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

Good.

Cache as you go

Now I want to introduce an extra step in the part of the code where I deal with requests for pages. Whenever I fetch a page from the network, I’m going to take the opportunity to squirrel it away in a cache. I’m calling that cache “pages”. I’m imaginative like that.

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        const copy = responseFromFetch.clone();
        try {
            fetchEvent.waitUntil(
                caches.open('pages')
                .then( pagesCache => {
                    return pagesCache.put(request, copy);
                })
            )
        } catch(error) {
            console.error(error);
        }
        return responseFromFetch;
    })

You’ll notice that I can’t put the response itself (responseFromFetch) into the cache. That’s a stream that I only get to use once. Instead I need to make a copy:

const copy = responseFromFetch.clone();

That’s what gets put in the pages cache:

fetchEvent.waitUntil(
    caches.open('pages')
    .then( pagesCache => {
        return pagesCache.put(request, copy);
    })
)

Now my logic for page requests has an extra piece to it:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, show a custom offline page instead.

Here’s my updated fetch-handling code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                const copy = responseFromFetch.clone();
                try {
                    fetchEvent.waitUntil(
                        caches.open('pages')
                        .then( pagesCache => {
                            return pagesCache.put(request, copy);
                        })
                    )
                } catch(error) {
                    console.error(error);
                }
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

I call this the cache-as-you-go pattern. The more pages someone views on my site, the more pages they’ll have cached.

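One caveat with cache-as-you-go: the pages cache only ever grows. Trimming it is outside the scope of this walkthrough, but a rough sketch of how a trim could work (the function name and the limit of 35 items are arbitrary) might be:

function trimCache(cacheName, maxItems) {
    caches.open(cacheName)
    .then( cache => {
        cache.keys()
        .then( keys => {
            // Keys come back in insertion order, so the first one is the oldest.
            if (keys.length > maxItems) {
                cache.delete(keys[0])
                .then( () => {
                    trimCache(cacheName, maxItems);
                });
            }
        });
    });
}

Something like trimCache('pages', 35) could then be run every so often, for example whenever the service worker gets a message from a page.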
Now that there’s an ever-growing cache of previously visited pages, I can update my offline fallback. Currently, I reach straight for the custom offline page:

.catch( fetchError => {
    return caches.match('/offline');
})

But now I can try looking for a cached copy of the requested page first:

.catch( fetchError => {
    return caches.match(request)
    .then( responseFromCache => {
        return responseFromCache || caches.match('/offline');
    })
})

Now my offline logic is expanded:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead.

I can also access this ever-growing cache of pages from my custom offline page to show people which pages they can revisit, even if there’s no internet connection.

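The Cache API is available in the window context too, not just inside service workers, so the offline page itself can open the pages cache. Something along these lines would do it (the cachedpages list element is an assumption, not something from this site):

caches.open('pages')
.then( pagesCache => {
    return pagesCache.keys();
})
.then( requests => {
    // Build a link for each previously cached page.
    // Assumes the offline page contains an empty list with the ID "cachedpages".
    const list = document.getElementById('cachedpages');
    requests.forEach( request => {
        const listItem = document.createElement('li');
        const link = document.createElement('a');
        link.href = request.url;
        link.textContent = request.url;
        listItem.appendChild(link);
        list.appendChild(listItem);
    });
});
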
So far, so good. Everything I’ve outlined so far is a good robust strategy for handling offline situations. Now I’m going to deal with the lie-fi situation, and it’s that cache-as-you-go strategy that sets me up nicely.

Timing out

I want to throw this addition into my logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

The first thing I’m going to do is rewrite my code a bit. If the fetch event is for a page, I’m going to respond with a promise:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        new Promise( resolveWithResponse => {
            // Code for handling page requests.
        })
    );
}

Promises are kind of weird things to get your head around. They’re tailor-made for doing things asynchronously. You can set up two parameters: a success condition and a failure condition. If the success condition is executed, then we say the promise has resolved. If the failure condition is executed, then the promise rejects.

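Here’s a generic example that has nothing to do with service workers: a little function that returns a promise which resolves after a delay, or rejects if it’s given a nonsensical delay:

const wait = milliseconds => new Promise( (resolve, reject) => {
    if (milliseconds < 0) {
        // Failure condition: the promise rejects.
        reject(new Error('Cannot wait a negative amount of time'));
    } else {
        // Success condition: the promise resolves once the time has passed.
        setTimeout( () => resolve(milliseconds), milliseconds);
    }
});

wait(1000)
.then( value => console.log(`Resolved after ${value} milliseconds`) )
.catch( error => console.error(error) );
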
In my re-written code, I’m calling the success condition resolveWithResponse (and I haven’t bothered with a failure condition, tsk, tsk). I’m going to use resolveWithResponse in my promise everywhere that I used to have a return statement:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                fetch(request)
                .then( responseFromFetch => {
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        )
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    })
                })
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

By itself, rewriting my code as a promise doesn’t change anything. Everything’s working the same as it did before. But now I can introduce the time-out logic. I’m going to put this inside my promise:

const timer = setTimeout( () => {
    caches.match(request)
    .then( responseFromCache => {
        if (responseFromCache) {
            resolveWithResponse(responseFromCache);
        }
    })
}, 3000);

If a request takes longer than three seconds (3000 milliseconds), that code will execute. At that point, the promise attempts to resolve with a response from the cache instead of waiting for the network. If there is a cached response, that’s what the user now gets. If there isn’t, then the wait continues for the network.

The last thing left for me to do is cancel the countdown to timing out if a network response does return within three seconds. So I put this in the then clause that’s triggered by a successful network response:

clearTimeout(timer);

I also add the clearTimeout statement to the catch clause that handles offline situations. Here’s the final code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                const timer = setTimeout( () => {
                    caches.match(request)
                    .then( responseFromCache => {
                        if (responseFromCache) {
                            resolveWithResponse(responseFromCache);
                        }
                    })
                }, 3000);
                fetch(request)
                .then( responseFromFetch => {
                    clearTimeout(timer);
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        )
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    clearTimeout(timer);
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    })
                })
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

That’s the JavaScript translation of this logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

For everything else, try finding a cached version first, otherwise fetch it from the network.

Pros and cons

As with all service worker enhancements to a website, this strategy will do absolutely nothing for first-time visitors. If you’ve never visited my site before, you’ve got nothing cached. But the more you return to the site, the more your cache is primed for speedy retrieval.

I think that serving up a cached copy of a page when the network connection is flaky is a pretty good strategy …most of the time. If we’re talking about a blog post on this site, then sure, there won’t be much that the reader is missing out on—a fixed typo or ten; maybe some additional webmentions at the end of a post. But if we’re talking about the home page, then a reader with a flaky network connection might think there’s nothing new to read when they’re served up a stale version.

What I’d really like is some way to know—on the client side—whether or not the currently-loaded page came from a cache or from a network. Then I could add some kind of interface element that says, “Hey, this page might be stale—click here if you want to check for a fresher version.” I’d also need some way in the service worker to identify any requests originating from that interface element and make sure they always go out to the network.

I think that should be doable somehow. If you can think of a way to do it, please share it. Write a blog post and send me the link.

But even without the option to over-ride the time-out, I’m glad that I’m at least doing something to handle the lie-fi situation. Perhaps I should write a sequel to Going Offline called Still Online But Only In Theory Because The Connection Sucks.

CSS-only chat

A truly monstrous async web chat using no JS whatsoever on the frontend.

This is …I mean …yes, but …it …I …

Tuesday, April 16th, 2019

A Webring Kit | Max Böck - Frontend Web Developer

Inspired by Charlie, here’s a straightforward bit of code for starting or joining your own webring.

Saturday, April 13th, 2019

Offline fallback page with service worker - Modern Web Development: Tales of a Developer Advocate by Paul Kinlan

Paul describes a fairly straightforward service worker recipe: a custom offline page for failed requests.

Inline an SVG file in HTML, declaratively & asynchronously!

Woah! This is one smart hack!

Scott has figured out a way to get all the benefits of pointing to an external SVG file …that then gets embedded. This means you can get all the styling and scripting benefits that only apply to embedded SVGs (like using fill).

The fallback is very graceful indeed: you still get the SVG (just not embedded).

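I won’t try to reproduce Scott’s exact approach here, but the general shape of a technique like this (very much simplified, and with a made-up data-inline attribute) might be: fetch the external SVG file, parse it, and swap it in for the referencing image.

// A hypothetical sketch, not Scott's actual code.
document.querySelectorAll('img[data-inline][src$=".svg"]').forEach( img => {
    fetch(img.src)
    .then( response => response.text() )
    .then( markup => {
        const template = document.createElement('template');
        template.innerHTML = markup;
        const svg = template.content.querySelector('svg');
        if (svg) {
            // Replace the <img> with the inline SVG so it can be styled and scripted.
            img.replaceWith(svg);
        }
    });
});
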
Now imagine using this technique for chunks of HTML too …transclusion, baby!

Thursday, April 4th, 2019

A progressive disclosure component - Andy Bell

This is a really nice write-up of creating an accessible progressive disclosure widget (a show/hide toggle).

Where it gets really interesting is when Andy shows how it could all be encapsulated into a web component with a progressive enhancement mindset.

Monday, March 18th, 2019

HTML periodical table (built with CSS grid)

This is a nifty visualisation by Hui Jing. It’s really handy to have elements categorised like this:

  • Root elements
  • Scripting
  • Interactive elements
  • Document metadata
  • Edits
  • Tabular data
  • Grouping content
  • Embedded content
  • Forms
  • Sections
  • Text-level semantics

Wednesday, March 13th, 2019

if statements and for loops in CSS - QuirksBlog

Personally, I find ppk’s comparison here to be spot on. I think that CSS can be explained in terms of programming concepts like if statements and for loops, if you squint at it just right.

This is something I’ve written about before.

Unpoly: Unobtrusive JavaScript framework

This looks like it could be an interesting library of interface patterns.

Saturday, March 9th, 2019

Updating email addresses with Mailchimp’s API

I’ve been using Mailchimp for years now to send out a weekly newsletter from The Session. But I never visit the Mailchimp website. Instead, I use the API to create a campaign each week, and then send it out. I also use the API whenever a member of The Session updates their email preferences (or changes their details).

I got an email from Mailchimp that their old API was being deprecated and I’d need to update to their more recent one. The code I was using had been happily running for about seven years, but now I’d have to change it.

Luckily, Drew has written a really handy Mailchimp API wrapper for PHP, the language that The Session’s codebase is in. Thanks, Drew! I downloaded that wrapper and updated my code accordingly.

Everything went pretty smoothly. I was able to create campaigns, send campaigns, add new subscribers, and delete subscribers. But I ran into an issue when I wanted to update someone’s email address (on The Session, you can edit your details at any time, including your email address).

Here’s the set up:

use \DrewM\MailChimp\MailChimp;
$MailChimp = new MailChimp('abc123abc123abc123abc123abc123-us1');
$list_id = 'b1234346';
$subscriber_hash = $MailChimp -> subscriberHash('currentemail@example.com');
$endpoint = 'lists/'.$list_id.'/members/'.$subscriber_hash;

Now to update details, according to the API, I can use the patch method on that endpoint:

$MailChimp -> patch($endpoint, [
    'email_address' => 'newemail@example.com'
]);

But that doesn’t work. Mailchimp effectively treats email addresses as unique IDs for subscribers. So the only way to change someone’s email address appears to be to delete them, and then subscribe them fresh with the new email address:

$MailChimp -> delete($endpoint);
$newendpoint = 'lists/'.$list_id.'/members';
$MailChimp -> post($newendpoint, [
    'email_address' => 'newemail@example.com',
    'status' => 'subscribed'
]);

That’s somewhat annoying, as the previous version of the API allowed email addresses to be updated, but this workaround isn’t too arduous.

Anyway, I figured I’d share this just in case it’s useful for anyone else migrating to the newer API.

Update: Belay that. Turns out that you can update email addresses, but you have to be sure to include the status value:

$MailChimp -> patch($endpoint, [
    'email_address' => 'newemail@example.com',
    'status' => 'subscribed'
]);

Okay, that’s a lot more straightforward. Ignore everything I said.