Tags: frontend

Am I cached or not?

When I was writing about the lie-fi strategy I’ve added to adactio.com, I finished with this thought:

What I’d really like is some way to know—on the client side—whether or not the currently-loaded page came from a cache or from a network. Then I could add some kind of interface element that says, “Hey, this page might be stale—click here if you want to check for a fresher version.”

Trys heard my plea, and came up with a very clever technique to alter the HTML of a page when it’s put into a cache.

It’s a function that reads the response body stream in, returning a new stream. Whilst reading the stream, it searches for the character codes that make up: <html. If it finds them, it tacks on a data-cached attribute.

Nice!

But then I was discussing this issue with Tantek and Aaron late one night after Indie Web Camp Düsseldorf. I realised that I might have another potential solution that doesn’t involve the service worker at all.

Caveat: this will only work for pages that have some kind of server-side generation. This won’t work for static sites.

In my case, pages are generated by PHP. I’m not doing a database lookup every time you request a page—I’ve got a server-side cache of posts, for example—but there is a little bit of assembly done for every request: get the header from here; get the main content from over there; get the footer; put them all together into a single page and serve that up.

This means I can add a timestamp to the page (using PHP). I can mark the moment that it was served up. Then I can use JavaScript on the client side to compare that timestamp to the current time.

I’ve published the code as a gist.

In a script element on each page, I have this bit of coducken:

var serverTimestamp = <?php echo time(); ?>;

Now the JavaScript variable serverTimestamp holds the timestamp that the page was generated. When the page is put in the cache, this won’t change. This number should be the number of seconds since January 1st, 1970 in the UTC timezone (that’s what my server’s timezone is set to).

Starting with JavaScript’s Date object, I use a caravan of methods like toUTCString() and getTime() to end up with a variable called clientTimestamp. This will give the current number of seconds since January 1st, 1970, regardless of whether the page is coming from the server or from the cache.

var localDate = new Date();
var localUTCString = localDate.toUTCString();
var UTCDate = new Date(localUTCString);
var clientTimestamp = UTCDate.getTime() / 1000;

Then I compare the two and see if there’s a discrepancy greater than five minutes:

if (clientTimestamp - serverTimestamp > (60 * 5))

If there is, then I inject some markup into the page, telling the reader that this page might be stale:

document.querySelector('main').insertAdjacentHTML('afterbegin',`
  <p class="feedback">
    <button onclick="this.parentNode.remove()">dismiss</button>
    This page might be out of date. You can try <a href="javascript:window.location=window.location.href">refreshing</a>.
  </p>
`);

The reader has the option to refresh the page or dismiss the message.

It’s not foolproof by any means. If the visitor’s computer has its clock set weirdly, then the comparison might return a false positive every time. Still, I thought that using UTC might be a safer bet.

All in all, I think this is a pretty good method for detecting if a page is being served from a cache. Remember, the goal here is not to determine if the user is offline—for that, there’s navigator.onLine.
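
For what it’s worth, that check is a simple property lookup. A minimal sketch (my own illustration, not code from this site):

if (!navigator.onLine) {
    // The browser believes there is no network connection at all.
    console.log('You appear to be offline.');
}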

The upshot is this: if you visit my site with a crappy internet connection (lie-fi), then after three seconds you may be served with a cached version of the page you’re requesting (if you visited that page previously). If that happens, you’ll now also be presented with a little message telling you that the page isn’t fresh. Then it’s up to you whether you want to have another go.

I like the way that this puts control back into the hands of the user.

Toast

Shockwaves rippled across the web standards community recently when it appeared that Google Chrome was unilaterally implementing a new element called toast. It turns out that’s not the case, but the confusion is understandable.

First off, this all kicked off with the announcement of “intent to implement”. That makes it sound like Google are intending to, well, …implement this. In fact “intent to implement” really means “intend to mess around with this behind a flag”. The language is definitely confusing, and this is something that will hopefully be addressed.

Secondly, Chrome isn’t going to ship a toast element. Instead, this is a proposal for a custom element currently called std-toast. I’m assuming that, should the experiment prove successful, it’s not a foregone conclusion that the final element will be named toast (minus the sexually-transmitted-disease prefix). If this turns out to be a useful feature, there will surely be a discussion between implementers about the naming of the finished element.

This is the ideal candidate for a web component. It makes total sense to create a custom element along the lines of std-toast. At first I was confused about why this was happening inside of a browser instead of first being created as a standalone web component, but it turns out that there’s been a fair bit of research looking at existing implementations in libraries and web components. So this actually looks like a good example of paving an existing cowpath.

But it didn’t come across that way. The timing of announcements felt like this was something that was happening without prior discussion. Terence Eden writes:

It feels like a Google-designed, Google-approved, Google-benefiting idea which has been dumped onto the Web without any consideration for others.

I know that isn’t the case. And I know how many dedicated people have worked hard on this proposal.

Adrian Roselli also remarks on the optics of this situation:

To be clear, while I think there is value in minting a native HTML element to fill a defined gap, I am wary of the approach Google has taken. A repo from a new-to-the-industry Googler getting a lot of promotion from Googlers, with Googlers on social media doing damage control for the blowback, WHATWG Googlers handling questions on the repo, and Google AMP strongly supporting it (to reduce its own footprint), all add up to raise alarm bells with those who advocated for a community-driven, needs-based, accessible web.

Dave Cramer made a similar point:

But my concern wasn’t so much about the nature of the new elements, but of how we learned about them and what that says about how web standardization works.

So there’s a general feeling (outside of Google) that there’s something screwy here about the order of events. A lot of discussion and research seems to have happened in isolation before announcing the intent to implement:

It does not appear that any discussions happened with other browser vendors or standards bodies before the intent to implement.

Why is this a problem? Google is seeking feedback on a solution, not on how to solve the problem.

Going back to my early confusion about putting a web component directly into a browser, this question on Discourse echoes my initial reaction:

Why not release std-toast (and other elements in development) as libraries first?

It turns out that std-toast and other in-browser web components are part of an idea called layered APIs. In theory this is an initiative in the spirit of the extensible web manifesto.

The extensible web movement focused on exposing low-level APIs to developers: the fetch API, the cache API, custom elements, Houdini, and all of those other building blocks. Layered APIs, on the other hand, focuses on high-level features …like, say, an HTML element for displaying “toast” notifications.

Layered APIs is an interesting idea, but I’m worried that it could be used to circumvent discussion between implementers. It’s a route to unilaterally creating new browser features first and standardising after the fact. I know that’s how many features already end up in browsers, but I think that the sooner that authors, implementers, and standards bodies get a say, the better.

I certainly don’t think this is a good look for Google given the debacle of AMP’s “my way or the highway” rollout. I know that’s a completely different team, but the external perception of Google amongst developers has been damaged by the AMP project’s anti-competitive abuse of Google’s power in search.

Right now, a lot of people are jumpy about Microsoft’s move to Chromium for Edge. My friends at Microsoft have been reassuring me that while it’s always a shame to reduce browser engine diversity, this could actually be a good thing for the standards process: Microsoft could theoretically keep Google in check when it comes to what features are introduced to the Chromium engine.

But that only works if there is some kind of standards process. Layered APIs in general—and std-toast in particular—hint at a future where a single browser vendor can plough ahead on their own. I sincerely hope that’s a misreading of the situation and that this has all been an exercise in miscommunication and misunderstanding.

Like Dave Cramer says:

I hear a lot about how anyone can contribute to the web platform. We’ve all heard the preaching about incubation, the Extensible Web, working in public, paving the cowpaths, and so on. But to an outside observer this feels like Google making all the decisions, in private, and then asking for public comment after the feature has been designed.

Three conference talks

Conference talks are like buses. They take a long time and you constantly ask yourself why you chose to get on board.

I’ll start again.

Conference talks are like buses. You wait for ages and then three come along at once. Or at least, three conference videos have come along at once:

  1. The video of the talk I gave at State Of The Browser called The Web Is Agreement.
  2. The video of the talk I gave at New Adventures called Building.
  3. The video of the talk I gave at Frontend United called Going Offline.

That last one is quite practical. It’s very much in the style of the book I wrote on service workers. If you’d like to see this talk, you should come to An Event Apart in Chicago in August.

The other two are …less practical. They’re kind of pretentious really. That’s kinda my style.

The Web Is Agreement was a one-off talk for State Of The Browser. I like how it turned out, and I’d love to give it again if there were a suitable event.

I will be giving my New Adventures talk again in Vancouver next month at the Design & Content conference. You should come along—it looks like it’s going to be a great event.

I’ve added these latest three conference talk videos to my collection. I’m using Notist to document past talks. It’s a great service! I became a paying customer just over a year ago and it was money well spent. I really like how I’ve been able to set up a custom domain:

speaking.adactio.com

Head’s role

I have a bittersweet feeling today. Danielle is moving on from Clearleft.

I used to get really down when people left. Over time I’ve learned not to take it as such a bad thing. I mean, of course it’s sad when someone moves on, but for them, it’s exciting. And I should be sharing in that excitement, not putting a damper on it.

Besides, people tend to stay at Clearleft for years and years—in the tech world, that’s unheard of. So it’s not really so terrible when they decide to head out to pastures new. They’ll always be Clearlefties. Just look at the lovely parting words from Harry, Paul, Ellen, and Ben:

Working at Clearleft was one of the best decisions I ever made. 6 years of some work that I’m most proud of, amongst some of the finest thinkers I’ve ever met.

(Side note: I’ve been thinking about starting a podcast where I chat to ex-Clearlefties. We could reflect on the past, look to the future, and generally just have a catch-up. Would that be self indulgent or interesting? Let me know what you think.)

So of course I’m going to miss working with Danielle, but as with other former ‘lefties, I’m genuinely excited to see what happens next for her. Clearleft has had an excellent three years of her time and now it’s another company’s turn.

In the spirit of “one door closes, another opens,” Danielle’s departure creates an opportunity for someone else. Fancy working at Clearleft? Well, we’re looking for a head of front-end development.

Do you remember back at the start of the year when we were hiring a front-end developer, and I wrote about writing job postings?

My first instinct was to look at other job ads and take my cue from them. But, let’s face it, most job ads are badly written, and prone to turning into laundry lists. So I decided to just write like I normally would. You know, like a human.

That worked out really well. We ended up hiring the ridiculously talented Trys Mudford. Success!

So I’ve taken the same approach with this job ad. I’ve tried to paint as clear and honest a picture as I can of what this role would entail. Like it says, there are three main parts to the job:

  • business support,
  • technical leadership, and
  • professional development.

Now, I could easily imagine someone reading the job description and thinking, “Nope! Not for me.” Let’s face it: There Will Be Meetings. And a whole lotta context switching:

Within the course of one day, you might go from thinking about thorny code problems to helping someone on your team with their career plans to figuring out how to land new business in a previously uncharted area of technology.

I can equally imagine someone reading that and thinking “Yes! This is what I’ve been waiting for.”

Oh, and in case you’re wondering why I’m not taking this role …well, in the short term, I will for a while, but I’d consider myself qualified for maybe one third to one half of the required tasks. Yes, I can handle the professional development side of things (in fact, I really, really enjoy that). I can handle some of the technical leadership stuff—if we’re talking about HTML, CSS, JavaScript, accessibility, and performance. But all of the back-of-the-front-end stuff—build tools, libraries, toolchains—is beyond me. And I think I’d be rubbish at the business support stuff, mostly because that doesn’t excite me much. But maybe it excites you! If so, you should apply.

I can picture a few scenarios where this role could be the ideal career move…

Suppose you’re a lead developer at a product company. You enjoy leading a team of devs, and you like setting the technical direction when it comes to the tools and techniques being used. But maybe you’re frustrated by always working on the same product with the same tech stack. The agency world, where every project is different, might be exactly what you’re looking for.

Or maybe you’re an accomplished and experienced front-end developer, freelancing and contracting for years. Perhaps you’re less enamoured with being so hands-on with the code all the time. Maybe you’ve realised that what you really enjoy is solving problems and evaluating technologies, and you’d be absolutely fine with having someone else take care of the implementation. Moving into a lead role like this might be the perfect way to make the best use of your time and have more impact with your decisions.

You get the idea. If any of this is sounding intriguing to you, you should definitely apply for the role. What do you have to lose?

Also, as it says in the job ad:

If you’re from a group that is under-represented in tech, please don’t hesitate to get in touch.

Timing out

Service workers are great for creating a good user experience when someone is offline. Heck, the book I wrote about service workers is literally called Going Offline.

But in some ways, the offline experience is relatively easy to handle. It’s a binary situation; either you’re online or you’re offline. What’s more challenging—and probably more common—is the situation that Jake calls Lie-Fi. That’s when technically you’ve got a network connection …but it’s a shitty connection, like one bar of mobile signal. In that situation, because there’s technically a connection, the user gets a slow frustrating experience. Whatever code you’ve got in your service worker for handling offline situations will never get triggered. When you’re handling fetch events inside a service worker, there’s no automatic time-out.

But you can make one.

That’s what I’ve done recently here on adactio.com. Before showing you what I added to my service worker script to make that happen, let me walk you through my existing strategy for handling offline situations.

Service worker strategies

Alright, so in my service worker script, I’ve got a block of code for handling requests from fetch events:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    // Do something with this request.
});

I’ve got two strategies in my code. One is for dealing with requests for pages:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
}

By adding an else clause I can have a different strategy for dealing with requests for anything else—images, style sheets, scripts, and so on:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
} else {
    // Code for handling everything else.
}

For page requests, I’m going to try the network first:

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        return responseFromFetch;
    })

My logic is:

When someone requests a page, try to fetch it from the network.

If that doesn’t work, we’re in an offline situation. That triggers the catch clause. That’s where I have my offline strategy: show a custom offline page that I’ve previously cached (during the install event):

.catch( fetchError => {
    return caches.match('/offline');
})

Now my logic has been expanded to this:

When someone requests a page, try to fetch it from the network, but if that doesn’t work, show a custom offline page instead.

So my overall code for dealing with requests for pages looks like this:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        fetch(request)
        .then( responseFromFetch => {
            return responseFromFetch;
        })
        .catch( fetchError => {
            return caches.match('/offline');
        })
    );
}

Now I can fill in the else statement that handles everything else—images, style sheets, scripts, and so on. Here my strategy is different. I’m looking in my caches first, and I only fetch the file from network if the file can’t be found in any cache:

caches.match(request)
.then( responseFromCache => {
    return responseFromCache || fetch(request);
})

Here’s all that fetch-handling code put together:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

Good.

Cache as you go

Now I want to introduce an extra step in the part of the code where I deal with requests for pages. Whenever I fetch a page from the network, I’m going to take the opportunity to squirrel it away in a cache. I’m calling that cache “pages”. I’m imaginative like that.

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        const copy = responseFromFetch.clone();
        try {
            fetchEvent.waitUntil(
                caches.open('pages')
                .then( pagesCache => {
                    return pagesCache.put(request, copy);
                })
            )
        } catch(error) {
            console.error(error);
        }
        return responseFromFetch;
    })

You’ll notice that I can’t put the response itself (responseFromFetch) into the cache. That’s a stream that I only get to use once. Instead I need to make a copy:

const copy = responseFromFetch.clone();

That’s what gets put in the pages cache:

fetchEvent.waitUntil(
    caches.open('pages')
    .then( pagesCache => {
        return pagesCache.put(request, copy);
    })
)

Now my logic for page requests has an extra piece to it:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, show a custom offline page instead.

Here’s my updated fetch-handling code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                const copy = responseFromFetch.clone();
                try {
                    fetchEvent.waitUntil(
                        caches.open('pages')
                        .then( pagesCache => {
                            return pagesCache.put(request, copy);
                        })
                    )
                } catch(error) {
                    console.error(error);
                }
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

I call this the cache-as-you-go pattern. The more pages someone views on my site, the more pages they’ll have cached.

Now that there’s an ever-growing cache of previously visited pages, I can update my offline fallback. Currently, I reach straight for the custom offline page:

.catch( fetchError => {
    return caches.match('/offline');
})

But now I can try looking for a cached copy of the requested page first:

.catch( fetchError => {
    return caches.match(request)
    .then( responseFromCache => {
        return responseFromCache || caches.match('/offline');
    })
});

Now my offline logic is expanded:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead.

I can also access this ever-growing cache of pages from my custom offline page to show people which pages they can revisit, even if there’s no internet connection.
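
Here’s a rough sketch of how that listing could work on the offline page. It assumes the offline page contains an empty list element with an id of history to write into (that id is my invention for this example):

caches.open('pages')
.then( pagesCache => pagesCache.keys() )
.then( requests => {
    // Assumes the offline page has an element like <ul id="history">.
    const list = document.getElementById('history');
    requests.forEach( request => {
        const item = document.createElement('li');
        const link = document.createElement('a');
        link.href = request.url;
        link.textContent = request.url;
        item.appendChild(link);
        list.appendChild(item);
    });
});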

So far, so good. Everything I’ve outlined so far is a good robust strategy for handling offline situations. Now I’m going to deal with the lie-fi situation, and it’s that cache-as-you-go strategy that sets me up nicely.

Timing out

I want to throw this addition into my logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

The first thing I’m going to do is rewrite my code a bit. If the fetch event is for a page, I’m going to respond with a promise:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        new Promise( resolveWithResponse => {
            // Code for handling page requests.
        })
    );
}

Promises are kind of weird things to get your head around. They’re tailor-made for doing things asynchronously. The function you pass in gets two callbacks: one for success and one for failure. If the success callback is executed, then we say the promise has resolved. If the failure callback is executed, then the promise rejects.
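
Here’s a minimal, generic sketch of that pattern (nothing to do with service workers yet). The function passed to the Promise constructor receives two callbacks, conventionally called resolve and reject:

const promiseOfData = new Promise( (resolve, reject) => {
    const everythingWorked = true; // a stand-in condition for this example
    if (everythingWorked) {
        resolve('here is your value'); // the promise resolves
    } else {
        reject(new Error('something went wrong')); // the promise rejects
    }
});

promiseOfData
.then( value => console.log(value) )
.catch( error => console.error(error) );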

In my re-written code, I’m calling the success condition resolveWithResponse (and I haven’t bothered with a failure condition, tsk, tsk). I’m going to use resolveWithResponse in my promise everywhere that I used to have a return statement:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                fetch(request)
                .then( responseFromFetch => {
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        )
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    })
                })
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

By itself, rewriting my code as a promise doesn’t change anything. Everything’s working the same as it did before. But now I can introduce the time-out logic. I’m going to put this inside my promise:

const timer = setTimeout( () => {
    caches.match(request)
    .then( responseFromCache => {
        if (responseFromCache) {
            resolveWithResponse(responseFromCache);
        }
    })
}, 3000);

If a request takes three seconds (3000 milliseconds), then that code will execute. At that point, the promise attempts to resolve with a response from the cache instead of waiting for the network. If there is a cached response, that’s what the user now gets. If there isn’t, then the wait continues for the network.

The last thing left for me to do is cancel the countdown to timing out if a network response does return within three seconds. So I put this in the then clause that’s triggered by a successful network response:

clearTimeout(timer);

I also add the clearTimeout statement to the catch clause that handles offline situations. Here’s the final code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                const timer = setTimeout( () => {
                    caches.match(request)
                    .then( responseFromCache => {
                        if (responseFromCache) {
                            resolveWithResponse(responseFromCache);
                        }
                    })
                }, 3000);
                fetch(request)
                .then( responseFromFetch => {
                    clearTimeout(timer);
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        )
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    clearTimeout(timer);
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    })
                })
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

That’s the JavaScript translation of this logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

For everything else, try finding a cached version first, otherwise fetch it from the network.

Pros and cons

As with all service worker enhancements to a website, this strategy will do absolutely nothing for first-time visitors. If you’ve never visited my site before, you’ve got nothing cached. But the more you return to the site, the more your cache is primed for speedy retrieval.

I think that serving up a cached copy of a page when the network connection is flaky is a pretty good strategy …most of the time. If we’re talking about a blog post on this site, then sure, there won’t be much that the reader is missing out on—a fixed typo or ten; maybe some additional webmentions at the end of a post. But if we’re talking about the home page, then a reader with a flaky network connection might think there’s nothing new to read when they’re served up a stale version.

What I’d really like is some way to know—on the client side—whether or not the currently-loaded page came from a cache or from a network. Then I could add some kind of interface element that says, “Hey, this page might be stale—click here if you want to check for a fresher version.” I’d also need some way in the service worker to identify any requests originating from that interface element and make sure they always go out to the network.

I think that should be doable somehow. If you can think of a way to do it, please share it. Write a blog post and send me the link.

But even without the option to over-ride the time-out, I’m glad that I’m at least doing something to handle the lie-fi situation. Perhaps I should write a sequel to Going Offline called Still Online But Only In Theory Because The Connection Sucks.

Frameworking

There are many reasons to use a JavaScript framework like Vue, Angular, or React. Last year, Nicole asked for some of those reasons. Her question received many, many answers from people pointing out the benefits of using a framework. Interestingly, though, not a single one of those benefits was for end users.

(Mind you, if the framework is being used on the server to pre-render pages, then it’s a moot point—in that situation, it makes no difference to the end user whether you use a framework or not.)

Hidde recently tried using a client-side JavaScript framework for the first time and documented the process:

In the last few months I built my very first framework-based front-end, in Vue.js. I complemented it with a router, a store and a GraphQL library, in order to have, respectively, multiple (virtual) pages, globally shared data and a smart way to load new data in my templates.

It’s a very even-handed write-up. I highly recommend reading it. He describes the pros and cons of using a framework and using vanilla JavaScript:

I am glad I tried a framework and found its features were extremely helpful in creating a consistent interface for my users. My hope is though, that I won’t forget about vanilla. It’s perfectly valid to build a website with no or few dependencies.

Speaking of vanilla JavaScript… the blogging machine that is Chris Ferdinandi also wrote a comparison post recently, asking Why do people choose frameworks over vanilla JS? Again, it’s very even-handed and well worth a read. He readily concedes that if you’re working at scale, a framework is almost certainly a good idea:

If you’re building a large scale application (literally Facebook, Twitter, QuickBooks scale), the performance wins of a framework make the overhead worth it.

Alas, I’ve seen many, many framework-driven sites that are most definitely not operating at that scale. Trys speaks the honest truth here:

We kid ourselves into thinking we’re building groundbreakingly complex systems that require bleeding-edge tools, but in reality, much of what we build is a way to render two things: a list, and a single item. Here are some users, here is a user. Here are your contacts, here are your messages with that contact. There ain’t much more to it than that.

Just the other day, I saw a new site launch that was mostly a marketing site—the home page weighed over five megabytes, two megabytes of which were taken up with JavaScript, and the whole thing required JavaScript to render text to the screen (I’m not going to link to it because I don’t want to engage in any kind of public shaming and finger-wagging).

I worry that all the perfectly valid (developer experience) reasons for using a framework are outweighing the more important (user experience) reasons for avoiding shipping your dependencies to end users. Like Alex says:

If your conception of “DX” doesn’t include it, or isn’t subservient to the user experience, rethink.

And yes, I am going to take this opportunity to link once again to Alex’s article The “Developer Experience” Bait-and-Switch. Please read it if you haven’t already. Please re-read it if you have.

Anyway, my main reason for writing this is to point you to thoughtful posts like Hidde’s and Chris’s. I think it’s great to see people thoughtfully weighing up the pros and cons of choosing any particular technology—I’m a bit obsessed with the topic of evaluating technology.

If you’re weighing up the pros and cons of using, say, a particular JavaScript library or framework, that’s wonderful. My worry is that there are people working in front-end development who aren’t putting that level of thought into their technology choices, but are instead using a particular framework because it’s what they’re used to.

To quote Grace Hopper:

The most dangerous phrase in the language is, ‘We’ve always done it this way.’

Inlining SVG background images in CSS with custom properties

Here’s a tiny lesson that I picked up from Trys that I’d like to share with you…

I was working on some upcoming changes to the Clearleft site recently. One particular component needed some SVG background images. I decided I’d inline the SVGs in the CSS to avoid extra network requests. It’s pretty straightforward:

.myComponent {
    background-image: url('data:image/svg+xml;utf8,<svg> ... </svg>');
}

You can basically paste your SVG in there, although you need to do a little bit of URL encoding: I found that converting # to %23 was enough for my needs.
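
For illustration, here’s roughly what that looks like with a trivial SVG of my own; the only character that needed encoding was the # in the fill colour, written as %23:

.myComponent {
    background-image: url('data:image/svg+xml;utf8,<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 10 10"><circle cx="5" cy="5" r="5" fill="%23ff6600"/></svg>');
}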

But here’s the thing. My component had some variations. One of the variations had multiple background images. There was a second background image in addition to the first. There’s no way in CSS to add an additional background image without writing a whole background-image declaration:

.myComponent--variant {
    background-image: url('data:image/svg+xml;utf8,<svg> ... </svg>'), url('data:image/svg+xml;utf8,<svg> ... </svg>');
}

So now I’ve got the same SVG source inlined in two places. That negates any performance benefits I was getting from inlining in the first place.

That’s where Trys comes in. He shared a nifty technique he uses in this exact situation: put the SVG source into a custom property!

:root {
    --firstSVG: url('data:image/svg+xml;utf8,<svg> ... </svg>');
    --secondSVG: url('data:image/svg+xml;utf8,<svg> ... </svg>');
}

Then you can reference those in your background-image declarations:

.myComponent {
    background-image: var(--firstSVG);
}
.myComponent--variant {
    background-image: var(--firstSVG), var(--secondSVG);
}

Brilliant! Not only does this remove any duplication of the SVG source, it also makes your CSS nice and readable: no more big blobs of SVG source code in the middle of your style sheet.

You might be wondering what will happen in older browsers that don’t support CSS custom properties (that would be Internet Explorer 11). Those browsers won’t get any background image. Which is fine. It’s a background image. Therefore it’s decoration. If it were an important image, it wouldn’t be in the background.

Progressive enhancement, innit?

Split

When I talk about evaluating technology for front-end development, I like to draw a distinction between two categories of technology.

On the one hand, you’ve got the raw materials of the web: HTML, CSS, and JavaScript. This is what users will ultimately interact with.

On the other hand, you’ve got all the tools and technologies that help you produce the HTML, CSS, and JavaScript: pre-processors, post-processors, transpilers, bundlers, and other build tools.

Personally, I’m much more interested and excited by the materials than I am by the tools. But I think it’s right and proper that other developers are excited by the tools. A good balance of both is probably the healthiest mix.

I’m never sure what to call these two categories. Maybe the materials are the “external” technologies, because they’re what users will interact with. Whereas all the other technologies—that mostly live on a developer’s machine—are the “internal” technologies.

Another nice phrase is something I heard during Chris’s talk at An Event Apart in Seattle, when he quoted Brad, who talked about the front of the front end and the back of the front end.

I’m definitely more of a front-of-the-front-end kind of developer. I have opinions on the quality of the materials that get served up to users; the output should be accessible and performant. But I don’t particularly care about the tools that produced those materials on the back of the front end. Use whatever works for you (or whatever works for your team).

As a user-centred developer, my priority is doing what’s best for end users. That’s not to say I don’t value developer convenience. I do. But I prioritise user needs over developer needs. And in any case, those two needs don’t even come into conflict most of the time. Like I said, from a user’s point of view, it’s irrelevant what text editor or version control system you use.

Now, you could make the argument that anything that is good for developer convenience is automatically good for user experience because faster, more efficient development should result in better output. While that’s true in theory, I highly recommend Alex’s post, The “Developer Experience” Bait-and-Switch.

Where it gets interesting is when a technology that’s designed for developer convenience is made out of the very materials being delivered to users. For example, a CSS framework like Bootstrap is made of CSS. That’s different to a tool like Sass which outputs CSS. Whether or not a developer chooses to use Sass is irrelevant to the user—the final output will be CSS either way. But if a developer chooses to use a CSS framework, that decision has a direct impact on the user experience. The user must download the framework in order for the developer to get the benefit.

So whereas Sass sits at the back of the front end—where I don’t care what you use—Bootstrap sits at the front of the front end. For tools like that, I don’t think saying “use whatever works for you” is good enough. It’s got to be weighed against the cost to the user.

Historically, it’s been a similar story with JavaScript libraries. They’re written in JavaScript, and so they’re going to be executed in the browser. If a developer wanted to use jQuery to make their life easier, the user paid the price in downloading the jQuery library.

But I’ve noticed a welcome change with some of the bigger JavaScript frameworks. Whereas the initial messaging around frameworks like React touted the benefits of state management and the virtual DOM, I feel like that’s not as prevalent now. You’re much more likely to hear people—quite rightly—talk about the benefits of modularity and componentisation. If you combine that with the rise of Node—which means that JavaScript is no longer confined to the browser—then these frameworks can move from the front of the front end to the back of the front end.

We’ve certainly seen that at Clearleft. We’ve worked on multiple React projects, but in every case, the output was server-rendered. Developers get the benefit of working with a tool that helps them. Users don’t pay the price.

For me, this question of whether a framework will be used on the client side or the server side is crucial.

Let me tell you about a Clearleft project that sticks in my mind. We were working with a big international client on a product that was going to be rolled out to students and teachers in developing countries. This was right up my alley! We did plenty of research into network conditions and typical device usage. That then informed a tight performance budget. Every design decision—from web fonts to images—was informed by that performance budget. We were producing lean, mean markup, CSS, and JavaScript. But we weren’t the ones implementing the final site. That was being done by the client’s offshore software team, and they insisted on using React. “That’s okay”, I thought. “React can be used server-side so we can still output just what’s needed, right?” Alas, no. These developers did everything client side. When the final site launched, the log-in screen alone required megabytes of JavaScript just to render a form. It was, in my opinion, entirely unfit for purpose. It still pains me when I think about it.

That was a few years ago. I think that these days it has become a lot easier to make the decision to use a framework on the back of the front end. Like I said, that’s certainly been the case on recent Clearleft projects that involved React or Vue.

It surprises me, then, when I see the question of server rendering or client rendering treated almost like an implementation detail. It might be an implementation detail from a developer’s perspective, but it’s a key decision for the user experience. The performance cost of putting your entire tech stack into the browser can be enormous.

Alex Sanders from the development team at The Guardian published a post recently called Revisiting the rendering tier. In it, he describes how they’re moving to React. Now, if this were a move to client-rendered React, that would make a big impact on the user experience. The thing is, I couldn’t tell from the article whether React was going to be used in the browser or on the server. The article talks about “rendering”—which is something that browsers do—and “the DOM”—which is something that only exists in browsers.

So I asked. It turns out that this plan is very much about generating HTML and CSS on the server before sending it to the browser. Excellent!

With that question answered, I’m cool with whatever they choose to use. In this case, they’re choosing to use CSS-in-JS (although, to be pedantic, there’s no C anymore so technically it’s SS-in-JS). As long as the “JS” part is JavaScript on a server, then it makes no difference to the end user, and therefore no difference to me. Not my circus, not my monkeys. For users, the end result is the same whether styling is applied via a selector in an external stylesheet or, for example, via an inline style declaration (and in some situations, a server-rendered CSS-in-JS solution might be better for performance). And so, as a user-centred developer, this is something that I don’t need to care about.

Except…

I have misgivings. But just to be clear, these misgivings have nothing to do with users. My misgivings are entirely to do with another group of people: the people who make websites.

There’s a second-order effect. By making React—or even JavaScript in general—a requirement for styling something on a web page, the barrier to entry is raised.

At least, I think that the barrier to entry is raised. I completely acknowledge that this is a subjective judgement. In fact, the reason why a team might decide to make JavaScript a requirement for participation might well be because they believe it makes it easier for people to participate. Let me explain…

It wasn’t that long ago that devs coming from a Computer Science background were deriding CSS for its simplicity, complaining that “it’s broken” and turning their noses up at it. That rhetoric, thankfully, is waning. Nowadays they’re far more likely to acknowledge that CSS might be simple, but it isn’t easy. Concepts like the cascade and specificity are real head-scratchers, and any prior knowledge from imperative programming languages won’t help you in this declarative world—all your hard-won experience and know-how isn’t fungible. Instead, it seems as though all this cascading and specificity is butchering the modularity of your nicely isolated components.

It’s no surprise that programmers with this kind of background would treat CSS as damage and find ways to route around it. The many flavours of CSS-in-JS are testament to this. From a programmer’s point of view, this solution has made things easier. Best of all, as long as it’s being done on the server, there’s no penalty for end users. But now the price is paid in the diversity of your team. In order to participate, a Computer Science programming mindset is now pretty much a requirement. For someone coming from a more declarative background—with really good HTML and CSS skills—everything suddenly seems needlessly complex. And as Tantek observed:

Complexity reinforces privilege.

The result is a form of gatekeeping. I don’t think it’s intentional. I don’t think it’s malicious. It’s being done with the best of intentions, in pursuit of efficiency and productivity. But these code decisions are reflected in hiring practices that exclude people with different but equally valuable skills and perspectives.

Rachel describes HTML, CSS and our vanishing industry entry points:

If we make it so that you have to understand programming to even start, then we take something open and enabling, and place it back in the hands of those who are already privileged.

I think there’s a comparison here with toxic masculinity. Toxic masculinity is obviously terrible for women, but it’s also really shitty for men in the way it stigmatises any male behaviour that doesn’t fit its worldview. Likewise, if the only people your team is interested in hiring are traditional programmers, then those programmers are going to resent having to spend their time dealing with semantic markup, accessibility, styling, and other disciplines that they never trained in. Heydon correctly identifies this as reluctant gatekeeping:

By assuming the role of the Full Stack Developer (which is, in practice, a computer scientist who also writes HTML and CSS), one takes responsibility for all the code, in spite of its radical variance in syntax and purpose, and becomes the gatekeeper of at least some kinds of code one simply doesn’t care about writing well.

This hurts everyone. It’s bad for your team. It’s even worse for the wider development community.

Last year, I was asked “Is there a fear or professional challenge that keeps you up at night?” I responded:

My greatest fear for the web is that it becomes the domain of an elite priesthood of developers. I firmly believe that, as Tim Berners-Lee put it, “this is for everyone.” And I don’t just mean it’s for everyone to use—I believe it’s for everyone to make as well. That’s why I get very worried by anything that raises the barrier to entry to web design and web development.

I’ve described a number of dichotomies here:

  • Materials vs. tools,
  • Front of the front end vs. back of the front end,
  • User experience vs. developer experience,
  • Client-side rendering vs. server-side rendering,
  • Declarative languages vs. imperative languages.

But the split that worries me the most is this:

  • The people who make the web vs. the people who are excluded from making the web.

Drag’n’drop revisited

I got a message from a screen-reader user of The Session recently, letting me know of a problem they were having. I love getting any kind of feedback around accessibility, so this was like gold dust to me.

They pointed out that the drag’n’drop interface for rearranging the order of tunes in a set was inaccessible.

Drag and drop

Of course! I slapped my forehead. How could I have missed this?

It had been a while since I had implemented that functionality, so before even looking at the existing code, I started to think about how I could improve the situation. Maybe I could capture keystroke events from the arrow keys and announce changes via ARIA values? That sounded a bit heavy-handed though: mess with people’s native keyboard functionality at your peril.

Then I looked at the code. That was when I realised that the fix was going to be much, much easier than I thought.

I documented my process of adding the drag’n’drop functionality back in 2016. Past me had his progressive enhancement hat on:

One of the interfaces needed for this feature was a form to re-order items in a list. So I thought to myself, “what’s the simplest technology to enable this functionality?” I came up with a series of select elements within a form.

Reordering

The problem was in my feature detection:

There’s a little bit of mustard-cutting going on: does the dragula object exist, and does the browser understand querySelector? If so, the select elements are hidden and the drag’n’drop is enabled.

The logic was fine, but the execution was flawed. I was being lazy and hiding the select elements with display: none. That hides them visually, but it also hides them from screen readers. I swapped out that style declaration for one that visually hides the elements, but keeps them accessible and focusable.
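
For reference, this is the sort of visually-hidden rule I mean. It’s the widely used utility pattern rather than necessarily the exact declaration on The Session:

/* Hidden visually, but still available to screen readers
   and still focusable, unlike display: none. */
.visually-hidden {
    position: absolute;
    width: 1px;
    height: 1px;
    padding: 0;
    margin: -1px;
    overflow: hidden;
    clip: rect(0 0 0 0);
    clip-path: inset(50%);
    white-space: nowrap;
    border: 0;
}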

It was a very quick fix. I had the odd sensation of wanting to thank Past Me for making things easy for Present Me. But I don’t want to talk about time travel because if we start talking about it then we’re going to be here all day talking about it, making diagrams with straws.

I pushed the fix, told the screen-reader user who originally contacted me, and got a reply back saying that everything was working great now. Success!

Dev perception

Chris put together a terrific round-up of posts recently called Simple & Boring. It links off to a number of great articles on the topic of complexity (and simplicity) in web development.

I had linked to quite a few of the articles myself already, but one I hadn’t seen was from David DeSandro who wrote New tech gets chatter:

You don’t hear about TextMate because TextMate is old. What would I tweet? Still using TextMate. Still good.

I think that’s a very good point.

It’s relatively easy to write and speak about new technologies. You’re excited about them, and there’s probably an eager audience who can learn from what you have to say.

It’s trickier to write something insightful about a tried and trusted (perhaps even boring) technology that’s been around for a while. You could maybe write little tips and tricks, but I bet your inner critic would tell you that nobody’s interested in hearing about that old tech. It’s boring.

The result is that what’s being written about is not a reflection of what’s being widely used. And that’s okay …as long as you know that’s the case. But I worry that there’s a perception problem. Because of the outsize weighting of new and exciting technologies, a typical developer could feel that their skills are out of date and the technologies they’re using are passé …even if those technologies are actually in wide use.

I don’t know about you, but I constantly feel like I’m behind the curve because I’m not currently using TypeScript or GraphQL or React. Those are all interesting technologies, to be sure, but the time to pick any of them up is when they solve a specific problem I’m having. Learning a new technology just to mitigate a fear of missing out isn’t a scalable strategy. It’s reasonable to investigate a technology because you genuinely think it’s exciting; it’s quite another matter to feel like you must investigate a technology in order to survive. That way lies burn-out.

I find it very grounding to talk to Drew and Rachel about the people using their Perch CMS product. These are working developers, but they are far removed from the world of tools and frameworks forged in the startup world.

In a recent (excellent) article comparing the performance of Formula One websites, Jake made this observation at the end:

However, none of the teams used any of the big modern frameworks. They’re mostly Wordpress & Drupal, with a lot of jQuery. It makes me feel like I’ve been in a bubble in terms of the technologies that make up the bulk of the web.

I think this is very astute. I also think it’s completely understandable to form ideas about what matters to developers by looking at what’s being discussed on Twitter, what’s being starred on Github, what’s being spoken about at conferences, and what’s being written about on Ev’s blog. But it worries me when I see browser devrel teams focusing their efforts on what appears to be the needs of typical developers based on the amount of ink spilled and breath expelled.

I have a suspicion that there’s a silent majority of developers who are working with “boring” technologies on “boring” products in “boring” industries …you know, healthcare, government, education, and other facets of everyday life that any other industry would value more highly than Uber for dogs.

Trys wrote a great blog post called City life, where he compares his experience of doing CMS-driven agency work with his experience working at a startup in Shoreditch:

I was chatting to one of the team about my previous role. “I built two websites a month in WordPress”.

They laughed… “WordPress! Who uses that anymore?!”

Nearly a third of the web as it turns out - but maybe not on the Silicon Roundabout.

I’m not necessarily suggesting that there should be more articles and talks about older, more established technologies. Conferences in particular are supposed to give audiences a taste of what’s coming—they can be a great way of quickly finding out what’s exciting in the world of development. But we shouldn’t feel bad if those topics don’t match our day-to-day reality.

Ultimately what matters is building something—a website, a web app, whatever—that best serves end users. If that requires a new and exciting technology, that’s great. But if it requires an old and boring technology, that’s also great. What matters here is appropriateness.

When we’re evaluating technologies for appropriateness, I hope that we will do so through the lens of what’s best for users, not what we feel compelled to use based on a gnawing sense of irrelevancy driven by the perceived popularity of newer technologies.

CSS custom properties in generated content

Cassie posted a neat tiny lesson that she’s written a reduced test case for.

Here’s the situation…

CSS custom properties are fantastic. You can drop them in just about anywhere that a property takes a value.

Here’s an example of defining a custom property for a length:

:root {
    --my-value: 1em;
}

Then I can use that anywhere I’d normally give something a length:

.my-element {
    margin-bottom: var(--my-value);
}

I went a bit overboard with custom properties on the new Patterns Day site. I used them for colour values, font stacks, and spacing. Design tokens, I guess. They really come into their own when you combine them with media queries: you can update the values of the custom properties based on screen size …without having to redefine where those properties are applied. Also, they can be updated via JavaScript so they make for a great common language between CSS and JavaScript: you can define where they’re used in your CSS and then update their values in JavaScript, perhaps in response to user interaction.
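
Here’s a simplified illustration of that media query pattern, with made-up values rather than the actual Patterns Day ones. The custom property is redefined at a breakpoint, and everything that references it picks up the new value automatically:

:root {
    --spacing: 1em;
}
@media all and (min-width: 50em) {
    :root {
        --spacing: 2em; /* only the value changes; the rules that use it stay the same */
    }
}
.my-element {
    margin-bottom: var(--spacing);
}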

But there are a few places where you can’t use custom properties. You can’t, for example, use them as part of a media query. This won’t work:

@media all and (min-width: var(--my-value)) {
    ...
}

You also can’t use them in generated content if the value is a number. This won’t work:

:root {
    --number-value: 15;
}
.my-element::before {
    content: var(--number-value);
}

Fair enough. Generated content in CSS is kind of a strange beast. Eric delivered an entire hour-long talk at An Event Apart in Seattle on generated content.

But Cassie found a workaround if the value you want to put into that content property is numeric. The CSS counter value is a kind of generated content—the numbers that appear in front of ordered list items. And you can control the value of those numbers from CSS.

CSS counters work kind of like variables. You name them and assign values to them using the counter-reset property:

.my-element {
    counter-reset: mycounter 15;
}

You can then reference the value of mycounter in a content property (on a pseudo-element) using the counter() function:

.my-element::before {
    content: counter(mycounter);
}

Cassie realised that even though you can’t pass in a custom property directly to generated content, you can pass in a custom property to the counter-reset property. So you can do this:

:root {
    --number-value: 15;
}
.my-element::before {
    counter-reset: mycounter var(--number-value);
    content: counter(mycounter);
}

In a roundabout way, this allows you to use a custom property for generated content!

I realise that the use cases are pretty narrow, but I can’t help but be impressed with the thinking behind this. Personally, I would’ve just read that generated content doesn’t accept custom properties and moved on. I would’ve given up quickly. But Cassie took a step back and found a creative pass-the-parcel solution to the problem.

I feel like this is a hack in the best sense of the word: a creatively improvised solution to a problem or limitation.

I was trying to display the numeric value stored in a CSS variable inside generated content… Turns out you can’t do that. But you can do this… codepen.io/cassie-codes/p… (not saying you should, but you could)

Handing back control

An Event Apart Seattle was most excellent. This year, the AEA team are trying something different and making each event three days long. That’s a lot of mindblowing content!

What always fascinates me at events like these is the way that some themes seem to emerge, without any prior collusion between the speakers. This time, I felt that there was a strong thread of giving control directly to users:

Sarah and Margot both touched on this when talking about authenticity in brand messaging.

Margot described this in terms of vulnerability for the brand, but the kind of vulnerability that leads to trust.

Sarah talked about it in terms of respect—respecting the privacy of users, and respecting the way that they want to use your services. Call it compassion, call it empathy, or call it just good business sense, but providing these kind of controls in an interface is an excellent long-term strategy.

In Val’s animation talk, she did a deep dive into prefers-reduced-motion, a media query that deliberately hands control back to the user.
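
As a minimal sketch of how that works (the class name is a placeholder), you can switch animation off for anyone who has asked their operating system for less motion:

@media (prefers-reduced-motion: reduce) {
    .carousel {
        animation: none;
        transition: none;
    }
}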

Even in a CSS-heavy talk like Jen’s, she took the time to explain why starting with meaningful markup is so important—it’s because you can’t control how the user will access your content. They may use tools like reader modes, or Pocket, or have web pages read aloud to them. The user has the final say, and rightly so.

In his CSS talk, Eric reminded us that a style sheet is a list of strong suggestions, not instructions.

Beth’s talk was probably the most explicit on the theme of returning control to users. She drew on examples from beyond the world of the web—from architecture, urban planning, and more—to show that the most successful systems are not imposed from the top down, but involve everyone, especially those most marginalised.

And even in my own talk on service workers, I raved about the design pattern of allowing users to save pages offline to read later. Instead of trying to guess what the user wants, give them the means to take control.

I was really encouraged to see this theme emerge. Mind you, when I look at the reality of most web products, it’s easy to get discouraged. Far from providing their users with controls over their own content, Instagram won’t even let their customers have a chronological feed. And Matt recently wrote about how both Twitter and Quora are heading further and further away from giving control to their users in his piece called Optimizing for outrage.

Still, I came away from An Event Apart Seattle with a renewed determination to do my part in giving people more control over the products and services we design and develop.

I spent the first two days of the conference trying to liveblog as much as I could. I find it really focuses my attention, although it’s also quite knackering. I didn’t do too badly; I managed to cover eleven of the talks (out of the conference’s total of seventeen):

  1. Slow Design for an Anxious World by Jeffrey Zeldman
  2. Designing for Trust in an Uncertain World by Margot Bloomstein
  3. Designing for Personalities by Sarah Parmenter
  4. Generation Style by Eric Meyer
  5. Making Things Better: Redefining the Technical Possibilities of CSS by Rachel Andrew
  6. Designing Intrinsic Layouts by Jen Simmons
  7. How to Think Like a Front-End Developer by Chris Coyier
  8. From Ideation to Iteration: Design Thinking for Work and for Life by Una Kravets
  9. Move Fast and Don’t Break Things by Scott Jehl
  10. Mobile Planet by Luke Wroblewski
  11. Unsolved Problems by Beth Dean

Going Offline—the talk of the book

I gave a new talk at An Event Apart in Seattle yesterday morning. The talk was called Going Offline, which the eagle-eyed amongst you will recognise as the title of my most recent book, all about service workers.

I was quite nervous about this talk. It’s very different from my usual fare. Usually I have some big sweeping arc of history, and lots of pretentious ideas joined together into some kind of narrative arc. But this talk needed to be more straightforward and practical. I wasn’t sure how well I would manage that brief.

I knew from pretty early on that I was going to show—and explain—some code examples. Those were the parts I sweated over the most. I knew I’d be presenting to a mixed audience of designers, developers, and other web professionals. I couldn’t assume too much existing knowledge. At the same time, I didn’t want to teach anyone to suck eggs.

In the end, there was an overarching meta-theme to the talk, which was this: logic is more important than code. In other words, figuring out what you’re trying to accomplish (and describing it clearly) is more important than typing curly braces and semi-colons. Programming is an act of translation. Before you can translate something, you need to be able to articulate it clearly in your own language first. By emphasising that point, I hoped to make the code less overwhelming to people unfamiliar with it.

I had tested the talk with some of my Clearleft colleagues, and they gave me great feedback. But I never know until I’ve actually given a talk in front of a real conference audience whether the talk is any good or not. Now that I’ve given the talk, and received more feedback, I think I can confidently say that it’s pretty damn good.

My goal was to explain some fairly gnarly concepts—let’s face it: service workers are downright weird, and not the easiest thing to get your head around—and to leave the audience with two feelings:

  1. This is exciting, and
  2. This is something I can do today.

I deliberately left time for questions, bribing people with free copies of my book. I got some great questions, and I may incorporate some of them into future versions of this talk (conference organisers, if this sounds like the kind of talk you’d like at your event, please get in touch). Some of the points brought up in the questions were:

  • Is there some kind of wizard for creating a typical service worker script for any site? I didn’t have a direct answer to this, but I have attempted to make a minimal viable service worker that could be used for just about any site (there’s a rough sketch of one after this list). Mostly I encouraged the questioner to roll their sleeves up and try writing a bespoke script. I also mentioned the Workbox library, but I gave my opinion that if you’re going to spend the time to learn the library, you may as well spend the time to learn the underlying language.
  • What are some state-of-the-art progressive web apps for offline user experiences? Ooh, this one kind of stumped me. I mean, the obvious poster children for progressive web apps are things like Twitter, Instagram, and Pinterest. They’re all great but the offline experience is somewhat limited. To be honest, I think there’s more potential for great offline experiences by publishers. I especially love the pattern on personal sites like Una’s and Sara’s where people can choose to save articles offline to read later—like a bespoke Instapaper or Pocket. I’d love to see that pattern adopted by some big publications. I particularly like that it gives so much more control directly to the end user. Instead of trying to guess what kind of offline experience they want, we give them the tools to craft their own.
  • Do caches get cleaned up automatically? Great question! And the answer is mostly no—although browsers do have their own heuristics about how much space you get to play with. There’s a whole chapter in my book about being a good citizen and cleaning up your caches, but I didn’t include that in the talk because it isn’t exactly exciting: “Hey everyone! Now we’re going to do some housekeeping—yay!”
  • Isn’t there potential for abuse here? This is related to the previous question, and it’s another great question to ask of any technology. In short, yes. Bad actors could use service workers to fill up caches unnecessarily. I’ve written about back door service workers too, although the real problem there is with iframes rather than service workers—iframes and cookies are technologies that are already being abused by bad actors, and we’re going to see more and more interventions by ethical browser makers (like Mozilla) to clamp down on those technologies …just as browsers had to clamp down on the abuse of pop-up windows in the early days of JavaScript. The cache API could become a tragedy of the commons. I liken the situation to regulation: we should self-regulate, but if we prove ourselves incapable of that, then outside regulation (by browsers) will be imposed upon us.
  • What kind of things are in the future for service workers? Excellent question! If you think about it, a service worker is kind of a conduit that gives you access to different APIs: the Cache API and the Fetch API being the main ones now. A service worker is like an airport and the APIs are like the airlines. There are other APIs that you can access through service workers. Notifications are available now on desktop and on Android, and they’ll be coming to iOS soon. Background Sync is another powerful API accessed through service workers that will get more and more browser support over time. The great thing is that you can start using these APIs today even if they aren’t universally supported. Then, over time, more and more of your users will benefit from those enhancements.
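
For what it’s worth, here’s a rough sketch of the kind of minimal, network-first service worker I had in mind; the cache name is a placeholder and this is by no means a definitive implementation:

// A bare-bones, network-first service worker
const cacheName = 'pages-v1';

addEventListener('fetch', event => {
  const request = event.request;
  // Only handle full page navigations; let everything else pass through
  if (request.mode !== 'navigate') return;
  event.respondWith(
    fetch(request)
      .then(response => {
        // Stash a copy of the successful response for next time
        const copy = response.clone();
        caches.open(cacheName).then(cache => cache.put(request, copy));
        return response;
      })
      .catch(() => caches.match(request)) // fall back to the cache when offline
  );
});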

If you attended the talk and want to learn more about service workers, there’s my book (obvs), but I’ve also written lots of blog posts about service workers and I’ve linked to lots of resources too.

Finally, here’s a list of links to all the books, sites, and articles I referenced in my talk…

Books

Sites

Progressive Web Apps

Move Fast and Don’t Break Things by Scott Jehl

Scott Jehl is speaking at An Event Apart in Seattle—yay! His talk is called Move Fast and Don’t Break Things:

Performance is a high priority for any site of scale today, but it can be easier to make a site fast than to keep it that way. As a site’s features and design evolves, its performance is often threatened for a number of reasons, making it hard to ensure fast, resilient access to services. In this session, Scott will draw from real-world examples where business goals and other priorities have conflicted with page performance, and share some strategies and practices that have helped major sites overcome those challenges to defend their speed without compromises.

The title is a riff on the “move fast and break things” motto, which comes from a more naive time on the web. But Scott finds part of it relatable. Things break. We want to move fast without breaking things.

This is a performance talk, which is another kind of moving fast. Scott starts with a brief history of not breaking websites. He’s been chipping away at websites for 20 years now. Remember Positioning Is Everything? How about Quirksmode? That one's still around.

In the early days, building a website that was “not broken” was difficult, but it was difficult for different reasons. We were focused on consistency. We had to deal with differences between browsers. There were two ways of dealing with browsers: browser detection and feature detection.

The feature-based approach was more sustainable but harder. It fits nicely with the practice of progressive enhancement. It’s a good mindset for dealing with the explosion of devices that kicked off later. Touch screens made us rethink our mouse- and hover-centric interactions. That made us realise how much keyboard-driven access mattered all along.

Browsers exploded too. And our data networks changed. With this explosion of considerations, it was clear that our early ideas of “not broken” didn’t work. Our notion of what constituted “not broken” was itself broken. Consistency just doesn’t cut it.

But there was a comforting part to this too. It turned out that progressive enhancement was there to help …even though we didn’t know what new devices were going to appear. This is a recurring theme throughout Scott’s career. So given all these benefits of progressive enhancement, it shouldn’t be surprising that it turns out to be really good for performance too. If you practice progressive enhancement, you’re kind of a performance expert already.

People started talking about new performance metrics that we should care about. We’ve got new tools, like Page Speed Insights. It gives tangible advice on how to test things. Web Page Test is another great tool. Once you prove you’re a human, Web Page Test will give you loads of details on how a page loaded. And you get this great visual timeline.

This is where we can start to discuss the metrics we want to focus on. Traditionally, we focused on file size, which still matters. But for goal-setting, we want to focus on user-perceived metrics.

First Meaningful Content. It’s about how soon the page appears to be useful to a user. Progressive enhancement is a perfect match for this! When you first make a request to a website, it’s usually for a web page. But to render that page, it might need to request more files like CSS or JavaScript. All of this adds up. From a user perspective, if the HTML is downloaded, but the browser can’t render it, that’s broken.

The average time for this on the web right now is around six seconds. That’s broken. The render blockers are the problem here.

Consider assets like scripts. Can you get the browser to load them without holding up the rendering of the page? If you can add async or defer to a script element in the head, you should do that. Sometimes that’s not an option though.
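
For example (the filenames are placeholders), either of these will download without blocking rendering:

<script src="analytics.js" async></script>
<script src="site.js" defer></script>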

For CSS, it’s tricky. We’ve delivered the HTML that we need but we’ve got to wait for the CSS before rendering it. So what can you bundle into that initial payload?

You can use server push. This is a new technology that comes with HTTP2. H2, as it’s called, is very performance-focused. Just turning on H2 will probably make your site faster. Server push allows the server to send files to the browser before the browser has even asked for them. You can do this with directives in Apache, for example. You could push CSS whenever an HTML file is requested. But we need to be careful not to go too far. You don’t want to send too much.
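
Scott didn’t show the config itself, but assuming Apache with mod_http2 and mod_headers enabled, a sketch of pushing a stylesheet alongside HTML responses might look something like this (the filename is a placeholder):

# Hypothetical example: push site.css whenever an HTML file is requested
H2Push on
<FilesMatch "\.html$">
    Header add Link "</css/site.css>; rel=preload; as=style"
</FilesMatch>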

Server push is great in moderation. But it is new, and it may not even be supported by your server.

Another option is to inline CSS (well, actually Scott, this is technically embedding CSS). It’s great for first render, but isn’t it wasteful for caching? Scott has a clever pattern that uses the Cache API to grab the contents of the inlined CSS and put a copy of its contents into the cache. Then it’s ready to be served up by a service worker.

By the way, this isn’t just for CSS. You could grab the contents of inlined SVGs and create cached versions for later use.
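
I didn’t catch the exact code, but the gist of the pattern might look something like this (the URL and cache name are placeholders):

// Grab the inlined styles and store them as though they were an external file
var inlineStyles = document.querySelector('style').textContent;
if ('caches' in window) {
  caches.open('static-v1').then(function (cache) {
    cache.put('/css/site.css', new Response(inlineStyles, {
      headers: {'Content-Type': 'text/css'}
    }));
  });
}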

So inlining CSS is good, but again, in moderation. You don’t want to embed anything bigger than 15 or 20 kilobytes. You might want to separate out the critical CSS and only embed that on first render. You don’t need to go through your CSS by hand to figure out what’s critical—there are tools to do this that integrate with your build process. Embed that critical CSS into the head of your document, and also start preloading the full CSS. Here’s a clever technique that turns a preload link into a stylesheet link:

<link rel="preload" href="site.css" as="style" onload="this.rel='stylesheet'">

Also include this:

<noscript><link rel="stylesheet" href="site.css"></noscript>

You can also optimise for return visits. It’s all about the cache.

In the past, we might’ve used a cookie to distinguish a returning visitor from a first-time visitor. But cookies kind of suck. Here’s something that Scott has been thinking about: service workers can intercept outgoing requests. A service worker could send a header that matches the current build of CSS. On the server, we can check for this header. If it’s not the latest CSS, we can server push the latest version, or inline it.

The neat thing about service workers is that they have to install before they take over. Scott makes use of this install event to put your important assets into a cache. Only once that is done do we start adding that extra header to requests.
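
Pieced together from Scott’s description, the service worker side of that might look roughly like this (the header name and the cached asset are made up for illustration):

const version = '20190304'; // hypothetical CSS build identifier

addEventListener('install', event => {
  // Cache the important assets before this worker takes over
  event.waitUntil(
    caches.open('assets-' + version)
      .then(cache => cache.addAll(['/css/site.css']))
  );
});

addEventListener('fetch', event => {
  const request = event.request;
  if (request.mode !== 'navigate') return;
  // Tell the server which CSS build we already have cached
  const headers = new Headers(request.headers);
  headers.set('X-CSS-Version', version);
  event.respondWith(
    fetch(request.url, {headers: headers, credentials: 'same-origin'})
  );
});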

Watch out for an article on the Filament Group blog on this technique!

With performance, more weight doesn’t have to mean more wait. You can have a heavy page that still appears to load quickly by altering the prioritisation of what loads first.

Web pages are very heavy now. There’s a real cost to every byte. Tim’s WhatDoesMySiteCost.com shows that the CNN home page costs almost fifty cents to load for someone in America!

Time to interactive. This is the time before a user can actually use what’s on the screen. The issue is almost always with JavaScript. The page looks usable, but you can’t use it yet.

Addy Osmani suggests we should get to interactive in under five seconds on a 3G network on a median mobile device. Your iPhone is not a median mobile device. A typical phone takes six seconds to process a megabyte of JavaScript after it has downloaded. So even if the network is fast, the time to interactive can still be very long.

This all comes down to our industry’s increasing reliance on JavaScript just to render content. There seem to be pendulum shifts between client-side and server-side rendering. It’s been great to see libraries like Vue and Ember embrace server-side rendering.

But even with server-side rendering, there’s still usually a rehydration step where all the JavaScript gets parsed and that really affects time to interaction.

Code splitting can help. Webpack can do this. That helps with first-party JavaScript, but what about third-party JavaScript?

Scott believes it’s easier to make a fast website than to keep a fast website. And that’s down to all the third-party scripts that people throw in: analytics, ads, tracking. They can wreak havoc on all your hard work.

These scripts apparently contribute to the business model, so it can be hard for us to make the case for removing them. Tools like SpeedCurve can help people stay informed on the impact of these scripts. It allows you to set up performance budgets and it shows you when pages go over budget. When that happens, we have leverage to step in and push back.

Assuming you lose that battle, what else can we do?

These days, lots of A/B testing and personalisation happens on the client side. The tools are easy to use. But they are costly!

A typical problematic pattern is this: the server sends one version of the page, and once the page is loaded, the whole page gets replaced with a different layout targeted at the user. This leads to a terrifying new metric that Scott calls Second Meaningful Content.

Assuming we can’t remove the madness, what can we do? We could at least not do this for first-time visits. We could load the scripts asynchronously. We can preload the scripts at the top of the page. But ideally we want to move these things to the server. Server-side A/B testing and personalisation have existed for a while now.

Scott has been experimenting with a middleware solution. There’s this idea of server workers that Cloudflare is offering. You can manipulate the page that gets sent from the server to the browser—all the things you would do for an A/B test. Scott is doing this by using comments in the HTML to demarcate which portions of the page should be filtered for testing. The server worker then deletes a block for some users, and deletes a different block for other users. Scott has written about this approach.

The point here isn’t about using Cloudflare. The broader point is that it’s much faster to do these things on the server. We need to defend our users’ time.

Another issue, other than third-party scripts, is the page weight on home pages and landing pages. Marketing teams love to fill these things with enticing rich imagery and carousels. They’re really difficult to keep performant because they change all the time. Sometimes we’re not even in control of the source code of these pages.

We can advocate for new best practices like responsive images. The srcset attribute on the img element; the picture element for when you need more control. These are great tools. What’s not so great is writing the markup. It’s confusing! Ideally we’d have a CMS drive this, but a lot of the time, landing pages fall outside of the purview of the CMS.
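
The kind of markup Scott means looks something like this (the image names and widths are invented for illustration):

<img src="hero-small.jpg"
     srcset="hero-small.jpg 600w, hero-medium.jpg 1200w, hero-large.jpg 2000w"
     sizes="(min-width: 60em) 50vw, 100vw"
     alt="...">

<picture>
    <source media="(min-width: 60em)" srcset="hero-wide.jpg">
    <img src="hero-small.jpg" alt="...">
</picture>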

Scott has been using Vue.js to make a responsive image builder—a form that people can paste their URLs into, which spits out the markup to use. Anything we can do by creating tools like these really helps to defend the performance of a site.

Another thing we can do is lazy loading. Focus on the assets. The BBC homepage uses some lazy loading for images—they blink into view as you scroll down the page. They use LazySizes, which you can find on Github. You use data- attributes to list your image sources. Scott realises that LazySizes is not progressive enhancement. He wouldn’t recommend using it on all images, just some images further down the page.
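
If I’m remembering the LazySizes pattern correctly, the sources live in data- attributes and the library swaps them in as images scroll into view. The markup looks something like this:

<img data-src="photo.jpg"
     data-srcset="photo-small.jpg 600w, photo-large.jpg 1200w"
     data-sizes="auto"
     class="lazyload"
     alt="...">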

But thankfully, we won’t need these workarounds soon. Soon we’ll have lazy loading in browsers. There’s a lazyload attribute that we’ll be able to set on img and iframe elements:

<img src=".." alt="..." lazyload="on">

It’s not implemented yet, but it’s coming in Chrome. It might be that this behaviour even becomes the default way of loading images in browsers.

If you dig under the hood of the implementation coming in Chrome, it actually loads all the images, but the ones being lazyloaded are only partially fetched, with a 206 (Partial Content) response. That gives enough information for the browser to lay out the page without loading the whole image initially.

To wrap up, Scott takes comfort from the fact that there are resilient patterns out there to help us. And remember, it is our job to defend the user’s experience.

How to Think Like a Front-End Developer by Chris Coyier

Alright! It’s day two of An Event Apart in Seattle. The first speaker of the day is Chris Coyier. His talk is called How to Think Like a Front-End Developer. From the website:

The job title “front-end developer” is very real: job boards around the world confirm that. But what is that job, exactly? What do you need to know to do it? You might think those answers are pretty cut and dried, but they’re anything but; front-end development is going through something of an identity crisis. In this engaging talk, Chris will explore this identity through the lens of someone who has self-identified as a front-end developer for a few decades, but more interestingly, through many conversations he’s had with other successful front-end developers. You’ll see just how differently this job can be done and how differently people and companies can think of this role—not just for the sake of doing so, but because you’ll learn to be better at your own jobs by understanding how other people are good at theirs.

I’m going to see if I can keep up with Chris’s frenetic pace…

Chris has his own thoughts about what front-end dev is but he wants to share other ideas too. First of all, some grammar:

I work as a front-end developer.

I work on the front end.

Those are correct. These are not:

I work as a front end developer.

I work on the front-end.

And this is just not a word:

Frontend.

Lots of people are hiring front-end developers. So it’s definitely a job and a common job title. But what does it mean? Chris and Dave talked to eight different people on their Shop Talk Show podcast. Some highlights:

Eric feels that the term “front-end developer” is newer than the CSS Zen Garden. Everyone was a webmaster, or as we’d say now, a full-stack developer. But if someone back then used the term “front-end developer”, he’d know what it meant.

Mina says it deals with things you can see. If it’s a user-facing interface, that’s front-end development.

Trent says that he thinks of himself as a web designer and web builder. He doesn’t feel he has the deep expertise of a developer, and yet he spends all of his time in the browser.

So our job is in the browser. You deal with the browser (more so than other roles). And by the way, there are a lot of browsers out there.

Maybe the user is what differentiates front-end work. Monica says that a back-end developer is allowed not to care about the user if their job is putting a database together. It’s totally fine not to call yourself a front-end developer, but if you do, you need to care about the user.

There are tons of different devices and browsers. It’s overwhelming. So we just gave up.

So, a front-end developer:

  • Is a job and a job title.
  • It deals with browsers, devices, and users.
  • But what skills does it involve?

It’s taken for granted that you can use a computer. There’s also the soft skills of interacting with co-workers. Then there are the language-specific core skills. Finally, there are the bonus skills—all the stuff that makes you you.

Core skills

The languages you need to understand well enough to read, write, and maintain.

HTML and CSS. Definitely. You don’t come across front-end developers who don’t do those languages. But what about JavaScript?

Eric says it’s fine if you know lots of JavaScript but it’s also fine if you don’t write everything from scratch. But you can’t be oblivious to it. You need to understand what it can do.

So let’s put JavaScript into the bucket of core skills too.

Peggy believes that as a front-end developer you need to have a basic proficiency in accessibility too. This is, after all, about user-facing interfaces.

Bonus skills

The Figma team have a somewhat over-engineered graphic of all the skillsets that people might have, between “baseline” and “supplementary”.

Perhaps we all share a common trunk of skills, and then we branch in different directions.

Right now though, it feels like front-end development is having an identity crisis. It’s all about JavaScript, which is eating the planet.

JavaScript

JavaScript is crazy popular now. It’s unignorable. Yes, it’s the language in the browser, but now it’s also the language in loads of other places too.

Steven Davis says maybe we need to fork the term front-end development. Maybe we need to have UX engineers and JavaScript engineers. Can one person be great at both? Maybe the trunk of skills forks in two very different directions.

Vernon Joyce called this an identity crisis. The concepts in JavaScript frameworks are very alien to people with a background in HTML, CSS, and basic interactive JavaScript.

You could imagine two people called front-end developers meeting, and having nothing in common to talk about. Maybe sports.

Brad says he doesn’t want to be configuring build tools. He thinks of himself as being at the front of front-end development, whereas other people are at the back of front-end development.

This divide is super frustrating to people right now.

Hiring

Michael Scharnagl brings up the point about how it’s affecting hiring. Back-end developers are being replaced with JavaScript engineers. Lots of things that used to be back-end tasks are now happening on the client side. Component-driven design, site-level architecture, routing, getting data from the back end, mutating data, talking to APIs, and managing state—all of those things are now largely a front-end concern.

Let’s look at CodePen. There’s a little heart icon on each pen. It’s an icon component. And the combination of the heart and the overall count is also a component. And the bar of items altogether—that’s also a component. And the pen it sits under is a component. And the page it’s in is a component. And the URL for that page is a component. Now the whole site is a front-end developer’s concern.

In the past, a front-end developer would ask a back-end developer for an API endpoint. Now with GraphQL, the front-end developer can craft a query to get exactly what they need. Sure, the GraphQL stuff had to be set up in the first place, but that’s a one-time task. Once it’s set up, the front-end developer has everything they need.

All the old work hasn’t gone away either. Semantics, accessibility, styling—that’s still the work of a front-end developer as well as all of the new stuff listed above.

Hiring is a big part of this. Lara Schenk talks about going for an interview where she met 90% of the skills listed. Then in an interview, she was asked to do a fizzbuzz test. That’s not the way that Lara thinks. She would’ve been great for that job, but this single task derailed her. She wrote about it, and got snarky comments from people who thought she should’ve been able to do the task. But Lara’s main point was the mismatch between what was advertised and what was actually being hired for.

You see a job posting for front-end developer. Who is that for? Is it for someone into React, webpack, and GraphQL? Or is it for someone into SVG, interaction design, and accessibility? They’re both front-end developers. And remember, they can learn one another’s skills, but when it comes to hiring, it has to be about the skills people have right now.

Peggy talks about how specialised your work can be. You can specialise in SVG. You can specialise in APIs and data.

We’re probably not going to solve this right now. The hiring part is definitely the worst part right now. One solution is to use plain language in job posts. Make it clear what you’re looking for right now and explain what background you’re coming from. Use words instead of a laundry list of requirements.

Heydon Pickering talks about full-stack developers. More often than not, their core skills are hardcore computer science skills, and the front-end work is an afterthought.

Brad Frost concurs. It tends not to be the other way around. The output tends to be the badly-sketched front of the horse.

Even if there is a divide, that doesn’t absolve any of us from doing a good job. That’s true whether it’s computer science tasks or markup and CSS.

Despite the divide, performance, accessibility, and user experience are all our jobs.

Maybe this term “front-end developer” needs rethinking.

The brain game

Let’s peek into the minds of very different front-end developers. Chris and Dave went to Dribbble, pulled up a bunch of designs and put them in front of their guests on the Shop Talk Show.

Here’s a design of a page.

  • Brad looks at the design and sees a lot of components of different sizes and complexity.
  • Mina sees a bunch of media objects.
  • Eric sees HTML structures. That’s a heading. That’s a list. Over there is an unordered list.
  • Sam sees a lot of typography. She sees a type system.
  • Trent immediately starts thinking about how the design is supposed to work in different screen sizes.

Here’s a different, more image-heavy design.

  • Mina would love to tackle the animations.
  • Trent sees interesting textures and noise. He wonders how he could achieve those effects without exporting giant image files.
  • Brad, unsurprisingly, sees components, even in a seemingly bespoke layout.
  • Eric immediately sees a lot of SVG.
  • Sam needs to know what the HTML is.

Here’s a more geometric design.

  • Sam is drawn to the typography.
  • Mina sees an opportunity to use writing modes.
  • Trent sees a design that would reflow and reshape itself well.
  • Eric sees something with writing mode, grid, and custom fonts.

Here’s a financial mobile UI.

  • Trent wants to run it through a colour-contrast analyser, and he wants to know if the font size is too small.

Here’s a crazy festival website.

  • Mina wonders if it needs a background video, but worries about the performance.

Here’s an on-trend mobile design.

  • Monica sees something that looks like every other website.
  • Ben wonders whether it will work in other parts of the world. How will the interactions work? Separate pages or transitions? How will it feel?

Here’s an image-heavy design.

  • Monica wonders about the priority of which images to load first.

Here’s an extreme navigation with big images.

  • Ben worries about the performance on slow connections.
  • Monica gets stressed out about how much happens when you just click on a link.
  • Peggy sees something static and imagines using Gatsby for it.

Here’s a design that’s map-based.

  • Ben worries about the size of the touch targets.
  • Monica sees an opportunity to use SVGs.

Here’s a card UI.

  • Ben wonders what the browser support is. Can we use CSS grid or do we have to use something older?
  • Monica worries that this needs drag’n’drop. Now you’ve got a nightmare scenario.

Chris has been thinking about and writing about this topic of what makes someone a front-end developer, and what makes someone a good front-end developer. The debate will continue…

Designing Intrinsic Layouts by Jen Simmons

Alright, it’s time for the final talk of day one of An Event Apart in Seattle. The trifecta of CSS talks is going to finish with Jen Simmons talking about Designing Intrinsic Layouts. Here’s the description:

Twenty-five years after the web began, we finally have a real toolkit for creating layouts. Combining CSS Grid, Flexbox, Multicolumn, Flow layout and Writing Modes gives us the technical ability to build layouts today without the horrible hacks and compromises of the past. But what does this mean for our design medium? How might we better leverage the art of graphic design? How will we create something practical, useable, and realistically doable?

In a talk full of specific examples, Jen will walk you through the thinking process of creating accessible & reusable page and component layouts. For the last four years, Jen’s been getting audiences excited about what, when, and why. Now it’s time for how.

I’m not sure if live-blogging is going to work given the visual nature of this talk, but I’ll give it a try…

How many of us have written CSS using display: grid? Quite a few. How many people feel they have a good grip on it? Not so many.

Jen has spent the last few years encouraging people to really push the boundaries of graphic design on the web now that we have the tools to do so in CSS. But Jen is not here today to talk about amazing new things. Instead she’s going to show the “how?” The code is on labs.jensimmons.com.

Let’s look at laying out a header. You might have a header element with a logo image, the site name in an h1, and a nav element with the navigation. The logo, the site name, and the navigation are direct children of the parent header. By default we get everything stacked vertically.

<header role="banner">
    <img class="logo" src="..." alt="...">
    <h1>Site title</h1>
    <nav>
        <ul>
            <li><a href="...">Home</a></li>
            <li><a href="...">Episodes</a></li>
            <li><a href="...">Guests</a></li>
            <li><a href="...">Subscribe</a></li>
            <li><a href="...">About</a></li>
        </ul>
    </nav>
</header>

Why should we care about starting with semantic HTML? It matters for reusability and accessibility, but also for reader modes in browsers. These tools remove the cruft. If you mark up your content well, it will play nicely with reader modes. Interestingly, there are no metrics around how many people are using reader modes (by design). Mozilla has a product called Pocket that’s a “read later” app. It can also turn saved articles into a podcast for you. Well marked up content matters for audio playback.

Now let’s start applying some CSS. Fonts, colours, stuff like that. Everything is still stacking vertically though. It’s flow content. This can be our fallback layout. Now let’s apply our own layout. We could use float: left on the logo. Now we need some margins. We can try applying widths and floats to the h1 and the nav. Now we need a clearfix to get the parent to stretch to the full height of the content. It’s hacky. floats suck. But it’s all we had until we got flexbox. But even using flexbox for this kind of layout is a hack too. What we really need is CSS grid.

Apply display: grid to the header. Use Firefox’s dev tools to inspect the grid. Seeing the grid helps in understanding what’s going on. We’ve got three grid items in three separate rows. Notice that we don’t have margin collapsing any more. We can get rid of margins and use grid-gap instead. But what we want is three columns, not three rows. We’ll try:

grid-template-columns: 1fr 1fr 1fr;

Looks okay. Not exactly the spacing we want though. We want the logo and the navigation to take up less space than the site name. We could translate our old percentage values into fr equivalents. 8% becomes 8fr. 75% becomes 75fr. 17% becomes 17fr. But the logo and the nav never shrink below their actual size. Layout isn’t working like we’re used to. The content will never get smaller than the minimum content size. So the amount of space assigned to each column is no longer linear.

Let’s change those fr units to percentages. Now we need to get rid of our gaps. But now when the layout gets small, everything squishes up. This is what we need breakpoints for. But now we can do something else. Let’s make the logo max-content. That column will now be the size of the logo. The other columns can remain fr units:

header {
    grid-template-columns: max-content 3fr 1fr;
}

Let’s also define the last one—the navigation—as max-content. We can toss auto in for the middle content (the site name):

header {
    grid-template-columns: max-content auto max-content;
}

Let’s layout the navigation horizontally. The best tool for this is flexbox. The li elements are the flex items. So the ul needs to be the flex container.

ul {
    display: flex;
}

Looking good. Let’s allow items to wrap onto another line:

ul {
    display: flex;
    flex-wrap: wrap;
}

Let’s change that navigation from max-content to auto so that it doesn’t get too long:

header {
    grid-template-columns: max-content auto auto;
}

(How you want the navigation items to wrap determines whether you use flexbox or grid. Both are perfectly valid choices. There’s nothing wrong with using grid for the small stuff.)

Just look at how little code we need for this layout!

Let’s try a different layout. We’ll put the navigation on a new row.

header {
    grid-template-columns: min-content auto;
    grid-template-rows: auto auto
}

Let’s get the logo to span across both rows:

.logo {
    grid-row: span 2;
}

Need to finesse the alignment of the navigation? No problem. Play with the align-self property on the nav element.

Again, look at how little code this is!

But these layouts are safe. We need to break out of our habits. What about disjointed text? Let’s take the h1. Apply some typography and colour. Also apply overflow-wrap: anywhere. Now the text can break within words.

We can take this further. But to do that, we have to wrap all of my letters in separate spans. Yuck!

Apply display: grid to the h1 that contains all those span children. Say grid-template-columns: repeat(8, 1fr) to make an eight-column repeating grid.

Let’s make it more interesting. We can target each individual letter with grid-column and grid-row. There are many ways to tell the browser where to place grid items. We can tell them the start lines. By default it will take up one cell.
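
Something along these lines (the letter chosen and the line numbers are arbitrary):

h1 {
    display: grid;
    grid-template-columns: repeat(8, 1fr);
}
h1 span:nth-child(3) {
    grid-column: 5;
    grid-row: 2;
}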

Let’s add some images. Let’s rotate items. Place items wherever we want them. Mess around with the units to see what happens. Play with opacity when elements overlap. See the possibilities!

But, people cry, what about Internet Explorer? Use @supports:

/* code for every browser */

@supports (display: grid) {
    /* code for modern browsers */
}

Set up a fallback outside the @supports block. Toggle grid on and off in dev tools to see how the fallback will look. If this kind of thinking is new to you, please watch youtube.com/layoutland where Jen talks about resilient CSS.

Let’s try something else. Jen got an email announcement for an event. It had an interesting bit of layout with some text overlapping an image.

Mark up the content: some headings and images. By default the images are displayed inline. The headings are displayed as blocks.

Think about where your grid lines will need to go. How many lines will you need? How about setting your columns with 1fr 3fr 3fr 1fr? Now how many rows do you need? You define the grid on the container and tell the items where to go. Again, it’s not much code. Tweak it. How about setting the columns to be 1fr minmax(100px, 400px) min-content? You have to mess around to see what’s possible. You can use all sorts of units for columns: fixed lengths (pixels, ems, rems), min-content, max-content, percentages, fr units, minmax(), and auto. Play around with the combinations.

Jen shows a whole bunch of her demos. Check them out. Use web inspector to play around with them.

And with that, the first day of An Event Apart Seattle is done!

Making Things Better: Redefining the Technical Possibilities of CSS by Rachel Andrew

The CSStastic afternoon of day one of An Event Apart in Seattle continues with Rachel Andrew. Her talk is Making Things Better: Redefining the Technical Possibilities of CSS. The description reads:

For years we’ve explained that the web is not like print; that a particular idea is not how things work on the web; that certain things are simply not possible. Over the last few years, rapid browser implementation of advances in CSS have given us the ability to do many of these previously impossible things. We can use our new powers to build the same designs faster, or we can start to ask ourselves what we might do if we were solving these problems afresh.

In this talk, Rachel will look at the things coming into browsers right now which change the way we see web design. CSS subgrids allowing nested grids to use the track definition of their parent; logical properties and values moving the web away from the physical dimensions of a computer screen; screen experiences which behave more like an app, or even paged media, due to scroll snapping and multidimensional control. By understanding the new medium of web design we can start to imagine the future, and even help to shape it.

I’m not sure if it even makes sense to try to live-blog a code talk, but I’ll give it my best shot…

Rachel has been talking about CSS at An Event Apart for over three years now. Our understanding of the possibilities of CSS has changed a lot in that time. Our use of floats for layout is being consigned to history. It’s no less monumental than the change from tables to CSS. Tableless web design often meant simplifying our designs. We were used to designing in a graphic design tool and then slicing it up into table cells. CSS couldn’t give us the same fine-grained control so we simplified our designs. It got us to start thinking of the web as its own medium. That idea really progressed with responsive web design.

But perhaps us CSS advocates downplayed some of the issues. We weren’t trying to create new CSS, we were just trying to get people to use CSS.

What we term “good web design” is based in the technical limitations of CSS. We say “the web is not print” when we see a design that’s quite print-like. People expect to have to hack at CSS to get it to do what we want. But times have changed. We have solved many of those problems (but that doesn’t mean we got all of them!).

Rachel spends a lot of time telling designers: you never know how tall anything is on the web. It used to be a real challenge to get the top and bottom of boxes to line up. We’d have to fix the height of the boxes. And if too much content goes in, the content overflows. Then we end up limiting the amount of content at the CMS side. We hacked around the problem. A technical limitation influenced our design, and even our content management. Then we got flexbox. Not only did the problem disappear, but the default behaviour is exactly what we struggled with for years: equal height columns.

How big is this box? You’ve seen the “CSS is awesome mug”, right?

Our previous layout systems relied on percentage lengths for widths. Those values had to add up to 100%, and no more. People tried to do the same thing with flexbox. People made “grid systems” with flexbox that gave widths to everything. “Flexbox is weird!”, people said. But the real problem is that flexbox is not the layout method you think it is. It’s for taking a bunch of oddly-sized things and returning the most reasonable layout for those things. It assigns space in a smart way. That solves the problem of needing to give everything a width. It figures it out for you. If you decide to put widths on everything, you’re kind of working against flexbox.

We’re so used to having to hack everything in CSS, we had to take a step back with flexbox and realise that hacks aren’t necessary.

CSS tries to avoid data loss. That’s why the “CSS is awesome” text overflows the box. You don’t want the text to vanish. Visible overflow is messy, but it’s better than making some content disappear.

In the box alignment specification, there’s the concept of safe and unsafe alignment. Safe:

.container {
    display: flex;
    flex-direction: column;
    align-items: safe center;
}

You give the browser permission to align items to the start if necessary. But you can override that with unsafe:

.container {
    display: flex;
    flex-direction: column;
    align-items: unsafe center;
}

Overflow is going to happen. Now it’s up to you what happens when it does.

The “content honking out of the box” problem described in the “CSS is awesome” meme is now controlled with min-content:

width: min-content;

The box expands to encompass the widest content.

You have many choices. But what if the text isn’t running left to right? It might not be a problem we run into for English text. For years, CSS had that English-centric assumption baked in. Now CSS has been updated to not make that assumption. The web is not left to right. Flexbox and grid take an agnostic approach to the writing mode of the document. There’s no “top”, “bottom”, “left” or “right”. There’s “start”, “end”, “inline” and “block”. Now we have a new spec for logical properties and values. It maps old physical values (top, bottom, etc.) to the newer agnostic values.

So even if you use writing-mode to flip direction, width is still a width. Use inline-size instead of width and everything keeps working: the width maps to height when you apply a different writing-mode value. Eventually we’ll use those flow-relative values more than the old values. Solutions need to include different writing modes.
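
A quick sketch of what that looks like in practice (the class name is a placeholder):

.card {
    writing-mode: vertical-rl;
    inline-size: 20em; /* flows with the writing mode */
    margin-block-end: 1em; /* the logical equivalent of margin-bottom */
}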

There is no fold. We’ve said that for years, right? But we know where the fold is. We’ve got viewport units that represent the width and height of the browser viewport. We can start to make designs that make use of the viewport. You can size a screen full of images exactly to fit the visible space. Combine it with scroll-snap to get the page views to snap as the user scrolls. You get paged layout. That’s interesting for Rachel because she’s used to designing for paged layout in print versus continuous layout on the web.
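
A minimal version of that paged, snapping effect might look like this (the class names are placeholders):

.pages {
    height: 100vh;
    overflow-y: scroll;
    scroll-snap-type: y mandatory;
}
.page {
    height: 100vh;
    scroll-snap-align: start;
}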

What’s next for CSS grid? Grid layout has been the biggest problem-solver of recent years. But that doesn’t mean it has solved all the layout problems. New problems appear as we start to work with CSS grid. We often end up nesting grids. But the nested grids don’t have any knowledge of one another. We’re back to: you never know how tall things are on the web. We need a way to have a relationship. Some kind of, I don’t know, subgrid.

You could use display: contents. It removes a box from the visual display allowing grandchildren to act like children. The browser support is good, but there’s a stonking accessibility bug so don’t use it in production. Also you can’t apply visual styles to anything that’s got display: contents on it. But grid-template-rows: subgrid will solve this problem. The spec is in a good shape. We’re waiting for the first browser implementations.
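
The syntax in the spec looks like this (remember, there are no browser implementations yet, so treat it as a sketch of where things are heading):

.grid {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
}
.card {
    grid-row: span 2; /* the nested grid covers two of the parent’s rows */
    display: grid;
    grid-template-rows: subgrid;
}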

You will hit problems. Find new technical limitations. It’s just that we can’t do that stuff yet. We get the new stuff when we create it. Write up the problems you come across.

We’re finding the edges. Rachel is going to share her problems.

Rachel wants to put some text into her image grid. No problem. But then if there’s too much text, it might not fit in a height-restricted row. We can adjust the row to not be height-restricted, but then we lose the nice viewport-fitting layout.

In continuous media—which is what the web is—content inside multicol gets longer and longer. You can fix the height of the container but then the columns get created horizontally. What if you could say, I want my multicol container to be, say 100vh high, but if the content overflows, create a new 100vh high container below. Overflow in the block dimension. Maybe that’ll be in the next version of the multicol spec.

Multicol doesn’t solve Rachel’s image grid situation. What she needs is a way for the text to fill up a box and then flow into another box. The content needs to be semantically marked up—not broken into separate chunks for layout—and we want the browser to figure out where to break that content and fill up available space.

It comes as a surprise to people that a lot of paged media—books, magazines—are laid out with CSS. It’s in the paged media module. Prince is a good example of a user agent that supports paged media. There’s the concept of a page box: a physical page into which content can go. You get to define the boxes with physical dimensions like inches or centimeters. You create a bunch of margin boxes with generated content. Enough pages are created to hold all the content. You create your page model and flow the content through it.

Maybe apps and websites with defined screens are not that different from paged contexts. There have been attempts to create CSS specs that would allow this kind of content-flowing behaviour. CSS regions was one attempt. There was -ms-flow-into and -ms-flow-from in the IE and Edge implementations. You had to apply -ms-flow-from to an iframe element, which acted as the region that content flowed into.

Regions needs ready-prepared boxes for the content to flow into. But how can you know how many boxes to prepare in advance? You don’t know how big things will be on the web. Rachel has been told that there’s nothing wrong with the CSS regions spec because you can define a final bucket for all leftover content. That doesn’t seem like a viable solution.

CSS regions predated CSS grid and didn’t take off. Now that we’ve got grid, something like regions makes a lot of sense.

Web design has been involved in a constant battle with overflow. Whether it’s overflowing boxes, or there (not) being a fold, or multicol layout. Rachel thinks we can figure out a way to get regions to work. Perhaps regions paved the way for something better. Maybe it was just ahead of its time. There are a lot of things hidden away in CSS specs that never made it out: things that didn’t make sense until more advanced CSS came along.

Regions—like multicol—relies on fragmentation. Ever tried to stop a heading being the last thing on a page in a print stylesheet? We need good support for break-inside, break-before, and break-after.
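
In a print stylesheet, that looks something like this:

h2, h3 {
    break-after: avoid; /* don’t leave a heading stranded at the bottom of a page */
}
figure {
    break-inside: avoid; /* keep a figure together on one page */
}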

We create new things to solve problems. Maybe you don’t see the value of something like regions, but I bet there’s been something where you thought “I wish CSS could do this!”.

Rob wrote up a problem that he had with trying to have a floated element maintain its floatiness inside a grid. He saw it as a grid problem. Rachel saw it as an exclusions problem. Rob’s write-up was really valuable to demonstrate the need for exclusions. Writing things up is hugely valuable for pushing things forward. Write up your ideas—they’ll show us the use case.

Ask “why can’t I do that?” Let’s not fall into the temptation of making things grid-like just because we have CSS grid now. Keep pushing at the boundaries.

Many of the things Rachel has shown—grid, exclusions, regions—were implemented by Microsoft. With Edge moving to a Chromium rendering engine, we must make sure that we maintain diversity of thought in the standards process. Voices other than those of rendering engines need to contribute to the discussion.

At a W3C meeting or standards discussion, the room should not be 60-70% Googlers.

More than ever, the web needs diversity of thought. Rachel isn’t having a dig at Google. This isn’t a fight between good and evil. It’s a fight against any monoculture. So please contribute. Get involved. Together we can work for the future of the web platform.

Generation Style by Eric Meyer

It’s time for the afternoon talks at An Event Apart in Seattle. We’re going to have back-to-back CSS, kicking off with Eric Meyer. His talk is called Generation Style. The blurb says:

Consider, if you will, CSS generated content. We can, and sometimes even do, use it to insert icons before or after pieces of text. Occasionally we even use it add a bit of extra information. And once upon a time, we pressed it into service as a hack to get containers to wrap around their floated children. That’s all fine—but what good is generated content, really? What can we do with it? What are its limitations? And how far can we push content generation in a new landscape full of flexible boxes, grids, and more? Join Eric as he turns a spotlight on generated content and shows how it can be a generator of creativity as well as a powerful, practical tool for everyday use.

Wish me luck, ‘cause I’m going to try to capture the sense of this presentation…

So we had a morning of personas and user journeys. This afternoon: code, baby! Eric is going to dive into a very specific corner of CSS—generated content. For an hour. Let’s do it!

He shows the CSS Generated Content Module Level 3. Eric wants to focus on one bit: the pseudo-elements ::before and ::after. What does pseudo-element mean?

You might have used one of these pseudo-elements for blockquotes. Perhaps you’ve put a great big quotation mark in front of them.

blockquote::after {
    content: "“";
    font-size: 4em;
    opacity: 0.67;
    /* placement styles here */
}

Why is Eric using ::after? Because you can. You can put the ::after content wherever you want. But if your placement styles fail, this isn’t a good place for the generated content. So don’t do this. Use ::before.

Another example of using generated content is putting icons beside certain links:

a[href$=".pdf"]::after {
    content: url(i/icon.png);
    height: 1em;
    margin-right: 0.5em;
    vertical-align: top;
}

But these icons look yucky. And if you use larger images, they will be shown full size. You only have so much control over what happens in there. I mean, that’s true of all CSS: think of CSS as a series of strong suggestions. But here, we have even less control than we’re used to. Why isn’t the image 1em tall like I’ve specified in the CSS? Well, the generated content box is 1em tall but the image is breaking out of this box. How about this:

a[href]::after * {
    max-width: 100%;
    max-height: 100%
}

This doesn’t work. The image isn’t an element so it can’t be selected for.

The way around it is to use background images instead:

a[href$=".pdf"]::after {
    content: '';
    height: 1em; width: 1em;
    margin-right: 0.5em;
    vertical-align: top;
    background: center/contain;
    background-image: url(i/icon.png);
}

Notice there’s a right margin there. That stretches out the width of the whole link. That’s exactly the same as if there were an actual span in there:

a[href$=".pdf"] span {
    height: 1em; width: 1em;
    margin-right: 0.5em;
    vertical-align: top;
    background: center/contain;
    background-image: url(i/icon.png);
}

So why use generated content instead of a span? So that you don’t have to put extra spans in your markup.

Generated content is great for things that work great when they’re there, but still work fine if they’re not. It’s progressive enhancement.

You’ve almost certainly used generated content for the clearfix hack.

.clearfix::after {
    content: '';
    display: table;
    clear: both;
}

Ask your parents. It’s when we wanted to make the containing element for a group of floating elements to encompass the height of those elements. Ancient history, right? Well, Eric is showing an example of a certain large media company today. There are a lot of clearfixes in there.

Eric makes the clearfix visible:

.clearfix::after {
    content: '';
    display: table;
    clear: both;
    border: 10px solid purple;
}

It looks like a span: a 10 pixel wide box. Now change the display property:

.clearfix::after {
    content: '';
    display: block;
    clear: both;
    border: 10px solid purple;
}

Now it behaves more like a div than a span.

The big question here is: who cares?

Let’s say we’re making a site about corduroy pillows (I hear they’re really making headlines).

<header>
    <h1>Corduroy pillows</h1>
    <p>Lorem ipsum...</p>
</header>

We can add a box under the header:

header::after {
    content: " ";
    display: block;
    height: 1em;
}

You can do stuff with that extra content, like using a linear gradient:

header::after {
    content: " ";
    display: block;
    height: 1em;
    background: linear-gradient(to right, #DDD, #000, #DDD) center / 100% 1px no-repeat;
}

The colour stops are #DDD, #000, and #DDD. You get a nice gradated line under the header. You can chain a bunch of radial gradients together to get some nice effects. You could mix in some background images too. Now you’ve got some on-brand separators. You could even use generated content to add some “under construction” separators.

By the way, ever struggled to keep track of the order of backgrounds? Think about how you would order layers in Photoshop.
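Something along these lines (my own sketch, not code from the talk; i/ornament.png is a hypothetical on-brand image) chains a few layers together. The first background listed sits on top, just like the top layer in a Photoshop document:

header::after {
    content: " ";
    display: block;
    height: 2em;
    /* layers are listed top-to-bottom: the hypothetical ornament image sits on top,
       then a short radial-gradient line, then the full-width linear-gradient line */
    background:
        url(i/ornament.png) center / auto 1.5em no-repeat,
        radial-gradient(closest-side, #000, transparent) center / 50% 1px no-repeat,
        linear-gradient(to right, #DDD, #000, #DDD) center / 100% 1px no-repeat;
}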

How about if we could use generated content to make design tools?

div[id]::before {
    content: attr(id);
}

Now the generated content is taken from the id attribute. You can make it look like Firebug:

div[id]::before {
    content: '#' attr(id);
    font: 0.75rem monospace;
    position: absolute;
    top: 0;
    left: 0;
    border: 1px dashed red;
    padding: 0 0.25em;
    background: #FFD;
}

You can even make the content cover the whole box with bottom and right values too:

div[id]::before {
    content: '#' attr(id);
    font: 0.75rem monospace;
    position: absolute;
    top: 0;
    left: 0;
    bottom: 0;
    right: 0;
    border: 1px dashed red;
    padding: 0 0.25em;
    background: #FFD8;
}

(And yes, that is a hex value with opacity.)

Let’s make it less code-y:

div[id]::before {
    content: attr(id);
    font: bold 1.5rem Georgia, serif;
    position: absolute;
    top: 0;
    left: 0;
    bottom: 0;
    right: 0;
    border: 1px dashed red;
    padding: 0 0.25em;  
    background: #FFD8;
}

Throw in some text-shadow. Maybe some radial gradients. We’re at the wireframe stage. Let’s drop in some SVG images to show lines across the boxes.
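Roughly what that might look like (my own sketch rather than Eric’s exact code; i/wireframe-x.svg is a hypothetical image of crossed lines):

div[id]::before {
    content: attr(id);
    font: bold 1.5rem Georgia, serif;
    text-shadow: 0 1px 0 #FFF;
    position: absolute;
    top: 0;
    left: 0;
    bottom: 0;
    right: 0;
    border: 1px dashed red;
    padding: 0 0.25em;
    /* a hypothetical SVG of crossed lines, stretched over the whole box */
    background: #FFD8 url(i/wireframe-x.svg) center / 100% 100% no-repeat;
}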

How about automating design touches?

pre {
    padding: 0.75em 1.5em;
    background: #EEE;
    font: medium Consolas, monospace;
    position: relative;
}

Let’s say that applies to:

<pre class="css">
...
</pre>

You can generate labels with that class attribute:

pre::before {
    content: attr(class);
    display: block;
    padding: 0.25em 0 0.15em;
    font: bold 1em Noah, sans-serif;
    text-align: center;
    text-transform: uppercase;
}

Let’s align it to the top of its parent with negative margins:

pre::before {
    content: attr(class);
    display: block;
    padding: 0.25em 0 0.15em;
    margin: -0.75em -1.5em 1em;
    font: bold 1em Noah, sans-serif;
    text-align: center;
    text-transform: uppercase;
}

Or you can use absolute positioning:

pre::before {
    content: attr(class);
    display: block;
    padding: 0.25em 0 0.15em;
    position: absolute;
    top: 0;
    right: 0;
    left: 0;
    font: bold 1em Noah, sans-serif;
    text-align: center;
    text-transform: uppercase;
}

Now let’s change the writing mode:

pre::before {
    content: attr(class);
    display: block;
    padding: 0.25em 0 0.15em;
    position: absolute;
    top: 0;
    right: 0;
    bottom: 0;
    writing-mode: vertical-rl;
    font: bold 1em Noah, sans-serif;
    text-align: center;
    text-transform: uppercase;
}

Now the text is running down the side, but it’s turned on its side. You can transform it:

pre::before {
    content: attr(class);
    display: block;
    padding: 0.25em 0 0.15em;
    position: absolute;
    top: 0;
    right: 0;
    bottom: 0;
    writing-mode: vertical-rl;
    transform: rotate(180deg);
    font: bold 1em Noah, sans-serif;
    text-align: center;
    text-transform: uppercase;
}

But if you do this, be careful. Your left margin is no longer on the left. Everything’s flipped around.

You could also update the generated content according to the value of the class attribute:

pre.css::before {
    content: '{ CSS }';
}

pre.html::before {
    content: '< HTML >';
}

pre.js::before,
pre.javascript::before {
    content: '({ JS })();';
}

It’s presentational, so CSS feels like the right place to do this. But you can’t generate markup—just text. Angle brackets will be displayed in their raw form.

But positioning is so old-school. Let’s use CSS grid:

pre {
    display: grid;
    grid-template-columns: min-content 1fr;
    grid-gap: 0.75em;
}

pre::before {
    content: attr(class);
    margin: -1em 0;
    padding: 0.25em 0.1em 0.25em 0;
    writing-mode: vertical-rl;
    transform: rotate(180deg);
    font: bold 1em Noah, sans-serif;
    text-align: center;
    text-transform: uppercase;
}

Heck, you could get rid of the negative margins by putting the code content inside a code element and giving that a margin of 1em.
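A sketch of that variation (my interpretation, not Eric’s exact code): the spacing moves from the pre onto a code element, so the ::before label no longer needs negative margins:

pre {
    display: grid;
    grid-template-columns: min-content 1fr;
    grid-gap: 0.75em;
    padding: 0; /* the spacing moves to the code element instead */
}

pre > code {
    display: block;
    margin: 1em 1.5em 1em 0;
}

pre::before {
    content: attr(class);
    padding: 0.25em 0.1em 0.25em 0;
    writing-mode: vertical-rl;
    transform: rotate(180deg);
    font: bold 1em Noah, sans-serif;
    text-align: center;
    text-transform: uppercase;
}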

You can see generated content in action on the website of An Event Apart:

li.news::before {
    content: attr(data-cat);
    background-color: orange;
    color: white;
}

The data-cat attribute (which contains a category value) is displayed in the generated content.
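The markup it applies to looks something like this (a made-up example; the real attribute values on the site will differ):

<li class="news" data-cat="Conference">
    <a href="/news/seattle/">An Event Apart Seattle</a>
</li>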

Cool. That’s all stuff we can do now. What about next?

Well, suppose you had to put some legalese on your website. You could generate the numbers of nested sections:

h1 { counter-reset: section; }
h2 { counter-reset: subsection; }

Increment the numbers each time:

h2 { counter-increment: section; }
h3 { counter-increment: subsection; }

And display those values:

h2::before {
    content: counter(section) ".";
}
h3::before {
    content: counter(section) ":" counter(subsection, upper-roman);
}

Soon you’ll be able to cycle through a list of counter styles of your own creation with a @counter-style block.
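Speculatively, a custom counter style might look something like this (a sketch based on the CSS Counter Styles spec, not code from the talk; legal-section is a made-up name):

@counter-style legal-section {
    system: extends upper-roman;
    suffix: ". ";
}

h3::before {
    content: counter(subsection, legal-section);
}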

But remember, if you really need that content to be visible for everyone, don’t rely on generated content: put it in your markup. It’s for styles.

So, generated content. It’s pretty cool. You can do some surprising things with it. Maybe ::before this talk you didn’t think about generated content much, but ::after this talk, you will.

Accessibility on The Session

I spent some time this weekend working on an accessibility issue over on The Session. Someone using VoiceOver on iOS was having a hard time with some multi-step forms.

These forms have been enhanced with some Ajax to add some motion design: instead of refreshing the whole page, the next form is grabbed from the server while the previous one swooshes off the screen.

You can see similar functionality—without the animation—wherever there’s pagination on the site.

The pagination is using Ajax to enhance regular prev/next links—here’s the code.

The multi-step forms are using Ajax to enhance regular form submissions—here’s the code for that.

Both of those are using a wrapper I wrote for XMLHttpRequest.

That wrapper also adds some ARIA attributes. The region of the page that will be updated gets an aria-live value of polite. Then, whenever new content is being injected, the same region gets an aria-busy value of true. Once the update is done, the aria-busy value gets changed back to false.
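In rough outline, the idea looks something like this (a simplified sketch, not the actual wrapper code; the results element is hypothetical):

// a simplified sketch, not the actual wrapper from The Session
var region = document.getElementById('results'); // a hypothetical live region
region.setAttribute('aria-live', 'polite');

function loadInto(element, url) {
    element.setAttribute('aria-busy', 'true');
    var request = new XMLHttpRequest();
    request.open('GET', url);
    request.onload = function () {
        element.innerHTML = request.responseText;
        // the update is finished, so the region is no longer busy
        element.setAttribute('aria-busy', 'false');
    };
    request.send();
}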

That all seems to work fine, but I was also giving the same region of the page an aria-atomic value of true. My thinking was that, because the whole region was going to be updated with new content from the server, it was safe to treat it as one self-contained unit. But it looks like this is what was causing the problem, especially when I was also adding and removing class values on the region in order to trigger animations. VoiceOver seemed to be getting a bit confused and overly verbose.

I’ve removed the aria-atomic attribute now. True to its name, I’m guessing it’s better suited to small areas of a document rather than big chunks. (If anyone has a good primer on when to use and when to avoid aria-atomic, I’m all ears).

I was glad I was able to find a fix—hopefully one that doesn’t negatively impact the experience in other screen readers. As is so often the case, the issue was with me trying to be too clever with ARIA, and the solution was to ease up on adding so many ARIA attributes.

It also led to a nice discussion with some of the screen-reader users on The Session.

For me, all of this really highlights the beauty of the web, when everyone is able to contribute to a community like The Session, regardless of what kind of software they may be using. In the tunes section, that’s really helped by the use of ABC notation, as I wrote five years ago:

One of those screen-reader users got in touch with me shortly after joining to ask me to explain what ABC was all about. I pointed them at some explanatory links. Once the format “clicked” with them, they got quite enthused. They pointed out that if the sheet music were only available as an image, it would mean very little to them. But by providing the ABC notation alongside the sheet music, they could read the music note-for-note.

That’s when it struck me that ABC notation is effectively alt text for sheet music!

Then, for those of us who can read sheet music, the text of the ABC notation is automatically turned into an SVG image using the brilliant abcjs. It’s like an enhancement that’s applied, I dunno, what’s the word …progressively.

A tiny lesson in query selection

We have a saying at Clearleft:

Everything is a tiny lesson.

I bet you learn something new every day, even if it’s something small. These small tips and techniques can easily get lost. They seem almost not worth sharing. But it’s the small stuff that takes the least effort to share, and often provides the most reward for someone else out there. Take, for example, this great tip for getting assets out of Sketch that Cassie shared with me.

Cassie was working on a piece of JavaScript yesterday when we spotted a tiny lesson that tripped up both of us. The script was a fairly straightforward piece of DOM scripting. As a general rule, we do a sort of feature detection near the start of the script. Let’s say you’re using querySelector to get a reference to an element in the DOM:

var someElement = document.querySelector('.someClass');

Before going any further, check to make sure that the reference isn’t falsey (in other words, make sure that DOM node actually exists):

if (!someElement) return;

That will exit the script if there’s no element with a class of someClass on the page.

The situation that tripped us up was like this:

var myLinks = document.querySelectorAll('a.someClass');

if (!myLinks) return;

That should exit the script if there are no A elements with a class of someClass, right?

As it turns out, querySelectorAll is subtly different to querySelector. If you give querySelector a selector that doesn’t match anything, it returns a value of null. But querySelectorAll always returns a NodeList (which, for our purposes, behaves much like an array). So if the selector you pass to querySelectorAll doesn’t match anything, it still returns a NodeList, but an empty one. That means that instead of just testing for its existence, you need to test that it’s not empty by checking its length property:

if (!myLinks.length) return;
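Putting the two guards side by side (my own recap, not code from the original script):

function init() {
    var someElement = document.querySelector('.someClass'); // null if nothing matches
    var myLinks = document.querySelectorAll('a.someClass'); // an empty NodeList if nothing matches

    if (!someElement) return;    // fine: null is falsey
    if (!myLinks.length) return; // needed: an empty NodeList is still truthy
    // ...the rest of the DOM scripting...
}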

That’s a tiny lesson.