Tuesday, May 21st, 2019

Can “Indie” Social Media Save Us? | The New Yorker

This is a really great, balanced profile of the Indie Web movement. There’s thoughtful criticism alongside some well-deserved praise:

If we itemize the woes currently afflicting the major platforms, there’s a strong case to be made that the IndieWeb avoids them. When social-media servers aren’t controlled by a small number of massive public companies, the incentive to exploit users diminishes. The homegrown, community-oriented feel of the IndieWeb is superior to the vibe of anxious narcissism that’s degrading existing services.

Take Back Your Web - Tantek Çelik on Vimeo

Tantek’s barnstorming closing talk from Beyond Tellerrand. This is well worth 30 minutes of your time.

Own your domain. Own your content. Own your social connections. Own your reading experience. IndieWeb services, tools, and standards enable you to take back your web.

The Training Commission

Coming to your inbox soon:

The Training Commission is a speculative fiction email newsletter about the compromises and consequences of using technology to reckon with collective trauma. Several years after a period of civil unrest and digital blackouts in the United States, a truth and reconciliation process has led to a major restructuring of the federal government, major tech companies, and the criminal justice system.

Monday, May 20th, 2019

Science Fiction Doesn’t Have to Be Dystopian | The New Yorker

Ted Chiang has a new collection out‽ Why did nobody tell me‽

Okay, well, technically this is Joyce Carol Oates telling me. In any case …woo-hoo!!!

Friday, May 17th, 2019

Entry “Take back your web – Tantek Çelik @ Beyond Tellerrand Conference, Düsseldorf 2019” at Webrocker

Tom shares his thoughts on Tantek’s excellent closing talk at Beyond Tellerrand this week:

Yes, the message of this rather sombre closing talk of this year’s Beyond Tellerrand Conference Düsseldorf is important. Watch it. And then go out, take care of yourself and others, away from the screen. And then come back and publish your own stuff on your own site. Still not convinced? ok, then, please read Matthias Ott’s great article (published on his own site btw), and then start using your own site.

Thursday, May 16th, 2019

Welcome to Accessible App | Accessible App

A very welcome project from Marcus Herrmann, documenting how to make common interaction patterns accessible in popular frameworks: Vue, React, and Angular.

Sunday, May 12th, 2019

IndieWebCamp Düsseldorf 2019 | 2 | Flickr

Today was a good day …and here are the very good photos.

Into the Personal-Website-Verse · Matthias Ott – User Experience Designer

There is one alternative to social media sites and publishing platforms that has been around since the early, innocent days of the web. It is an alternative that provides immense freedom and control: The personal website. It’s a place to write, create, and share whatever you like, without the need to ask for anyone’s permission.

A wonderful and inspiring call to arms for having your own website—a place to express yourself, and a playground, all rolled into one.

Building and maintaining your personal website is an investment that is challenging and can feel laborious at times. Be prepared for that. But what you will learn along the way does easily make up for all the effort and makes the journey more than worthwhile.

Brendan Dawes - Permission to Write Stuff

A beautiful post by Brendan, comparing the ease of publishing on the web to the original Flip camera:

Right now there’s a real renaissance of people getting back to blogging on their own sites again. If you’ve been putting it off, think about the beauty and simplicity of that red button, press it, and try and help make the web the place it was always meant to be.

Friday, May 10th, 2019

HTML Symbols, Entities, Characters and Codes

For all your copying and pasting needs:

A delightful reference for HTML Symbols, Entities and ASCII Character Codes

Wednesday, May 8th, 2019

Timing out

Service workers are great for creating a good user experience when someone is offline. Heck, the book I wrote about service workers is literally called Going Offline.

But in some ways, the offline experience is relatively easy to handle. It’s a binary situation; either you’re online or you’re offline. What’s more challenging—and probably more common—is the situation that Jake calls Lie-Fi. That’s when technically you’ve got a network connection …but it’s a shitty connection, like one bar of mobile signal. In that situation, because there’s technically a connection, the user gets a slow frustrating experience. Whatever code you’ve got in your service worker for handling offline situations will never get triggered. When you’re handling fetch events inside a service worker, there’s no automatic time-out.

But you can make one.

That’s what I’ve done recently here on adactio.com. Before showing you what I added to my service worker script to make that happen, let me walk you through my existing strategy for handling offline situations.

Service worker strategies

Alright, so in my service worker script, I’ve got a block of code for handling requests from fetch events:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    // Do something with this request.
});

I’ve got two strategies in my code. One is for dealing with requests for pages:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
}

By adding an else clause I can have a different strategy for dealing with requests for anything else—images, style sheets, scripts, and so on:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
} else {
    // Code for handling everything else.
}

For page requests, I’m going to try the network first:

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        return responseFromFetch;
    })

My logic is:

When someone requests a page, try to fetch it from the network.

If that doesn’t work, we’re in an offline situation. That triggers the catch clause. That’s where I have my offline strategy: show a custom offline page that I’ve previously cached (during the install event):

.catch( fetchError => {
    return caches.match('/offline');
})
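
By the way, I’m not showing the install event code in this post, but a minimal sketch of it might look something like this (the cache name is just for illustration):

addEventListener('install', installEvent => {
    installEvent.waitUntil(
        caches.open('static')
        .then( staticCache => {
            // Cache the custom offline page ahead of time.
            // The cache name 'static' is just for illustration.
            return staticCache.addAll([
                '/offline'
            ]);
        })
    );
});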

Now my logic has been expanded to this:

When someone requests a page, try to fetch it from the network, but if that doesn’t work, show a custom offline page instead.

So my overall code for dealing with requests for pages looks like this:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        fetch(request)
        .then( responseFromFetch => {
            return responseFromFetch;
        })
        .catch( fetchError => {
            return caches.match('/offline');
        })
    );
}

Now I can fill in the else statement that handles everything else—images, style sheets, scripts, and so on. Here my strategy is different. I’m looking in my caches first, and I only fetch the file from the network if it can’t be found in any cache:

fetchEvent.respondWith(
    caches.match(request)
    .then( responseFromCache => {
        return responseFromCache || fetch(request);
    })
);

Here’s all that fetch-handling code put together:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

Good.

Cache as you go

Now I want to introduce an extra step in the part of the code where I deal with requests for pages. Whenever I fetch a page from the network, I’m going to take the opportunity to squirrel it away in a cache. I’m calling that cache “pages”. I’m imaginative like that.

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        const copy = responseFromFetch.clone();
        try {
            fetchEvent.waitUntil(
                caches.open('pages')
                .then( pagesCache => {
                    return pagesCache.put(request, copy);
                })
            )
        } catch(error) {
            console.error(error);
        }
        return responseFromFetch;
    })

You’ll notice that I can’t put the response itself (responseFromFetch) into the cache. That’s a stream that I only get to use once. Instead I need to make a copy:

const copy = responseFromFetch.clone();

That’s what gets put in the pages cache:

fetchEvent.waitUntil(
    caches.open('pages')
    .then( pagesCache => {
        return pagesCache.put(request, copy);
    })
)

Now my logic for page requests has an extra piece to it:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, show a custom offline page instead.

Here’s my updated fetch-handling code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                const copy = responseFromFetch.clone();
                try {
                    fetchEvent.waitUntil(
                        caches.open('pages')
                        .then( pagesCache => {
                            return pagesCache.put(request, copy);
                        })
                    )
                } catch(error) {
                    console.error(error);
                }
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

I call this the cache-as-you-go pattern. The more pages someone views on my site, the more pages they’ll have cached.

Now that there’s an ever-growing cache of previously visited pages, I can update my offline fallback. Currently, I reach straight for the custom offline page:

.catch( fetchError => {
    return caches.match('/offline');
})

But now I can try looking for a cached copy of the requested page first:

.catch( fetchError => {
    return caches.match(request)
    .then( responseFromCache => {
        return responseFromCache || caches.match('/offline');
    })
})

Now my offline logic is expanded:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead.

I can also access this ever-growing cache of pages from my custom offline page to show people which pages they can revisit, even if there’s no internet connection.
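
The offline page can do that with a little bit of client-side JavaScript. Something along these lines would work (a rough sketch rather than my actual code):

// A rough sketch (not the actual code on my offline page): list the URLs
// of previously cached pages so people can revisit them while offline.
if ('caches' in window) {
    caches.open('pages')
    .then( pagesCache => {
        return pagesCache.keys();
    })
    .then( cachedRequests => {
        const list = document.createElement('ul');
        cachedRequests.forEach( cachedRequest => {
            const listItem = document.createElement('li');
            const link = document.createElement('a');
            link.href = cachedRequest.url;
            link.textContent = cachedRequest.url;
            listItem.appendChild(link);
            list.appendChild(listItem);
        });
        document.body.appendChild(list);
    });
}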

So far, so good. Everything I’ve outlined so far is a good robust strategy for handling offline situations. Now I’m going to deal with the lie-fi situation, and it’s that cache-as-you-go strategy that sets me up nicely.

Timing out

I want to throw this addition into my logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

The first thing I’m going to do is rewrite my code a bit. If the fetch event is for a page, I’m going to respond with a promise:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        new Promise( resolveWithResponse => {
            // Code for handling page requests.
        })
    );
}

Promises are kind of weird things to get your head around. They’re tailor-made for doing things asynchronously. You can set up two parameters: a success condition and a failure condition. If the success condition is executed, then we say the promise has resolved. If the failure condition is executed, then the promise rejects.
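
Here’s the basic shape in isolation (a generic sketch, nothing to do with service workers):

// The function you pass to the Promise constructor gets two callbacks:
// call the first one to resolve, the second one to reject.
const coinToss = new Promise( (resolve, reject) => {
    if (Math.random() > 0.5) {
        resolve('heads'); // success: the promise resolves with this value
    } else {
        reject(new Error('tails')); // failure: the promise rejects
    }
});

coinToss
.then( result => console.log(result) )
.catch( error => console.error(error) );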

In my re-written code, I’m calling the success condition resolveWithResponse (and I haven’t bothered with a failure condition, tsk, tsk). I’m going to use resolveWithResponse in my promise everywhere that I used to have a return statement:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                fetch(request)
                .then( responseFromFetch => {
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        )
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    })
                })
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

By itself, rewriting my code as a promise doesn’t change anything. Everything’s working the same as it did before. But now I can introduce the time-out logic. I’m going to put this inside my promise:

const timer = setTimeout( () => {
    caches.match(request)
    .then( responseFromCache => {
        if (responseFromCache) {
            resolveWithResponse(responseFromCache);
        }
    })
}, 3000);

If a request takes longer than three seconds (3000 milliseconds), then that code will execute. At that point, the promise attempts to resolve with a response from the cache instead of waiting for the network. If there is a cached response, that’s what the user now gets. If there isn’t, then the wait continues for the network.

The last thing left for me to do is cancel the countdown to timing out if a network response does return within three seconds. So I put this in the then clause that’s triggered by a successful network response:

clearTimeout(timer);

I also add the clearTimeout statement to the catch clause that handles offline situations. Here’s the final code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                const timer = setTimeout( () => {
                    caches.match(request)
                    .then( responseFromCache => {
                        if (responseFromCache) {
                            resolveWithResponse(responseFromCache);
                        }
                    })
                }, 3000);
                fetch(request)
                .then( responseFromFetch => {
                    clearTimeout(timer);
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        )
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    clearTimeout(timer);
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    })
                })
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

That’s the JavaScript translation of this logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

For everything else, try finding a cached version first, otherwise fetch it from the network.

Pros and cons

As with all service worker enhancements to a website, this strategy will do absolutely nothing for first-time visitors. If you’ve never visited my site before, you’ve got nothing cached. But the more you return to the site, the more your cache is primed for speedy retrieval.

I think that serving up a cached copy of a page when the network connection is flaky is a pretty good strategy …most of the time. If we’re talking about a blog post on this site, then sure, there won’t be much that the reader is missing out on—a fixed typo or ten; maybe some additional webmentions at the end of a post. But if we’re talking about the home page, then a reader with a flaky network connection might think there’s nothing new to read when they’re served up a stale version.

What I’d really like is some way to know—on the client side—whether or not the currently-loaded page came from a cache or from a network. Then I could add some kind of interface element that says, “Hey, this page might be stale—click here if you want to check for a fresher version.” I’d also need some way in the service worker to identify any requests originating from that interface element and make sure they always go out to the network.

I think that should be doable somehow. If you can think of a way to do it, please share it. Write a blog post and send me the link.

But even without the option to over-ride the time-out, I’m glad that I’m at least doing something to handle the lie-fi situation. Perhaps I should write a sequel to Going Offline called Still Online But Only In Theory Because The Connection Sucks.

Monday, May 6th, 2019

A Full Life - MIT Technology Review

A cli-fi short story by Paolo Bacigalupi.

Friday, May 3rd, 2019

A Conspiracy To Kill IE6

This is a fascinating story of psychological manipulation and internal politics. It leaves me feeling queasy about the amount of power wielded by individuals in one single organisation.

Create a responsive grid layout with no media queries, using CSS Grid - Andy Bell

CSS grid and custom properties really are a match made in heaven.

Thursday, May 2nd, 2019

Frameworking

There are many reasons to use a JavaScript framework like Vue, Angular, or React. Last year, Nicole asked for some of those reasons. Her question received many, many answers from people pointing out the benefits of using a framework. Interestingly, though, not a single one of those benefits was for end users.

(Mind you, if the framework is being used on the server to pre-render pages, then it’s a moot point—in that situation, it makes no difference to the end user whether you use a framework or not.)

Hidde recently tried using a client-side JavaScript framework for the first time and documented the process:

In the last few months I built my very first framework-based front-end, in Vue.js. I complemented it with a router, a store and a GraphQL library, in order to have, respectively, multiple (virtual) pages, globally shared data and a smart way to load new data in my templates.

It’s a very even-handed write-up. I highly recommend reading it. He describes the pros and cons of using a framework and using vanilla JavaScript:

I am glad I tried a framework and found its features were extremely helpful in creating a consistent interface for my users. My hope is though, that I won’t forget about vanilla. It’s perfectly valid to build a website with no or few dependencies.

Speaking of vanilla JavaScript… the blogging machine that is Chris Ferdinandi also wrote a comparison post recently, asking Why do people choose frameworks over vanilla JS? Again, it’s very even-handed and well worth a read. He readily concedes that if you’re working at scale, a framework is almost certainly a good idea:

If you’re building a large scale application (literally Facebook, Twitter, QuickBooks scale), the performance wins of a framework make the overhead worth it.

Alas, I’ve seen many, many framework-driven sites that are most definitely not operating at that scale. Trys speaks the honest truth here:

We kid ourselves into thinking we’re building groundbreakingly complex systems that require bleeding-edge tools, but in reality, much of what we build is a way to render two things: a list, and a single item. Here are some users, here is a user. Here are your contacts, here are your messages with that contact. There ain’t much more to it than that.

Just the other day, I saw a new site launch that was mostly a marketing site—the home page weighed over five megabytes, two megabytes of which were taken up with JavaScript, and the whole thing required JavaScript to render text to the screen (I’m not going to link to it because I don’t want to engage in any kind of public shaming and finger-wagging).

I worry that all the perfectly valid (developer experience) reasons for using a framework are outweighing the more important (user experience) reasons for avoiding shipping your dependencies to end users. Like Alex says:

If your conception of “DX” doesn’t include it, or isn’t subservient to the user experience, rethink.

And yes, I am going to take this opportunity to link once again to Alex’s article The “Developer Experience” Bait-and-Switch. Please read it if you haven’t already. Please re-read it if you have.

Anyway, my main reason for writing this is to point you to thoughtful posts like Hidde’s and Chris’s. I think it’s great to see people thoughtfully weighing up the pros and cons of choosing any particular technology—I’m a bit obsessed with the topic of evaluating technology.

If you’re weighing up the pros and cons of using, say, a particular JavaScript library or framework, that’s wonderful. My worry is that there are people working in front-end development who aren’t putting that level of thought into their technology choices, but are instead using a particular framework because it’s what they’re used to.

To quote Grace Hopper:

The most dangerous phrase in the language is, ‘We’ve always done it this way.’

Tuesday, April 30th, 2019

A poem about Silicon Valley, assembled from Quora questions about Silicon Valley

What makes a startup ecosystem thrive?
What do people plan to do once they’re over 35?
Is an income of $160K enough to survive?
What kind of car does Mark Zuckerberg drive?

Sunday, April 28th, 2019

Norbert Wiener’s Human Use of Human Beings is more relevant than ever.

What would Wiener think of the current human use of human beings? He would be amazed by the power of computers and the internet. He would be happy that the early neural nets in which he played a role have spawned powerful deep-learning systems that exhibit the perceptual ability he demanded of them—although he might not be impressed that one of the most prominent examples of such computerized Gestalt is the ability to recognize photos of kittens on the World Wide Web.

Friday, April 26th, 2019

How I failed the <a>

I think the situation that Remy outlines here is quite common (in client-rehydrated server-rendered pages), but what’s less common is Remy’s questioning and iteration.

So I now have a simple rule of thumb: if there’s an onClick, there’s got to be an anchor around the component.
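
In React terms, that rule of thumb might look something like this (a hypothetical component of my own devising, not Remy’s actual code):

// A hypothetical sketch: keep a real anchor around the component so it
// still works as a plain link, and progressively enhance the click.
import React from 'react';

function ProductCard({ product }) {
    // Hypothetical presentational component.
    return <div className="product-card">{product.name}</div>;
}

function ProductLink({ product, onSelect }) {
    return (
        <a
            href={`/products/${product.id}`}
            onClick={ event => {
                event.preventDefault();
                onSelect(product); // client-side enhancement
            }}
        >
            <ProductCard product={product} />
        </a>
    );
}

export default ProductLink;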

Wednesday, April 24th, 2019

Untold History of AI - IEEE Spectrum

A terrific six-part series of short articles looking at the people behind the history of Artificial Intelligence, from Babbage to Turing to JCR Licklider.

  1. When Charles Babbage Played Chess With the Original Mechanical Turk
  2. Invisible Women Programmed America’s First Electronic Computer
  3. Why Alan Turing Wanted AI Agents to Make Mistakes
  4. The DARPA Dreamer Who Aimed for Cyborg Intelligence
  5. Algorithmic Bias Was Born in the 1980s
  6. How Amazon’s Mechanical Turkers Got Squeezed Inside the Machine

The history of AI is often told as the story of machines getting smarter over time. What’s lost is the human element in the narrative, how intelligent machines are designed, trained, and powered by human minds and bodies.